Node.js v5.11.0 Documentation
Table of Contents
- About this Documentation
- Synopsis
- Addons
- Assert
- assert(value[, message])
- assert.deepEqual(actual, expected[, message])
- assert.deepStrictEqual(actual, expected[, message])
- assert.doesNotThrow(block[, error][, message])
- assert.equal(actual, expected[, message])
- assert.fail(actual, expected, message, operator)
- assert.ifError(value)
- assert.notDeepEqual(actual, expected[, message])
- assert.notDeepStrictEqual(actual, expected[, message])
- assert.notEqual(actual, expected[, message])
- assert.notStrictEqual(actual, expected[, message])
- assert.ok(value[, message])
- assert.strictEqual(actual, expected[, message])
- assert.throws(block[, error][, message])
- Buffer
- Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe()
- Buffers and Character Encodings
- Buffers and TypedArray
- Buffers and ES6 iteration
- The --zero-fill-buffers command line option
- Class: Buffer
- new Buffer(array)
- new Buffer(buffer)
- new Buffer(arrayBuffer[, byteOffset[, length]])
- new Buffer(size)
- new Buffer(str[, encoding])
- Class Method: Buffer.alloc(size[, fill[, encoding]])
- Class Method: Buffer.allocUnsafe(size)
- Class Method: Buffer.byteLength(string[, encoding])
- Class Method: Buffer.compare(buf1, buf2)
- Class Method: Buffer.concat(list[, totalLength])
- Class Method: Buffer.from(array)
- Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])
- Class Method: Buffer.from(buffer)
- Class Method: Buffer.from(str[, encoding])
- Class Method: Buffer.isBuffer(obj)
- Class Method: Buffer.isEncoding(encoding)
- buf[index]
- buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])
- buf.copy(targetBuffer[, targetStart[, sourceStart[, sourceEnd]]])
- buf.entries()
- buf.equals(otherBuffer)
- buf.fill(value[, offset[, end]][, encoding])
- buf.indexOf(value[, byteOffset][, encoding])
- buf.includes(value[, byteOffset][, encoding])
- buf.keys()
- buf.length
- buf.readDoubleBE(offset[, noAssert])
- buf.readDoubleLE(offset[, noAssert])
- buf.readFloatBE(offset[, noAssert])
- buf.readFloatLE(offset[, noAssert])
- buf.readInt8(offset[, noAssert])
- buf.readInt16BE(offset[, noAssert])
- buf.readInt16LE(offset[, noAssert])
- buf.readInt32BE(offset[, noAssert])
- buf.readInt32LE(offset[, noAssert])
- buf.readIntBE(offset, byteLength[, noAssert])
- buf.readIntLE(offset, byteLength[, noAssert])
- buf.readUInt8(offset[, noAssert])
- buf.readUInt16BE(offset[, noAssert])
- buf.readUInt16LE(offset[, noAssert])
- buf.readUInt32BE(offset[, noAssert])
- buf.readUInt32LE(offset[, noAssert])
- buf.readUIntBE(offset, byteLength[, noAssert])
- buf.readUIntLE(offset, byteLength[, noAssert])
- buf.slice([start[, end]])
- buf.swap16()
- buf.swap32()
- buf.toString([encoding[, start[, end]]])
- buf.toJSON()
- buf.values()
- buf.write(string[, offset[, length]][, encoding])
- buf.writeDoubleBE(value, offset[, noAssert])
- buf.writeDoubleLE(value, offset[, noAssert])
- buf.writeFloatBE(value, offset[, noAssert])
- buf.writeFloatLE(value, offset[, noAssert])
- buf.writeInt8(value, offset[, noAssert])
- buf.writeInt16BE(value, offset[, noAssert])
- buf.writeInt16LE(value, offset[, noAssert])
- buf.writeInt32BE(value, offset[, noAssert])
- buf.writeInt32LE(value, offset[, noAssert])
- buf.writeIntBE(value, offset, byteLength[, noAssert])
- buf.writeIntLE(value, offset, byteLength[, noAssert])
- buf.writeUInt8(value, offset[, noAssert])
- buf.writeUInt16BE(value, offset[, noAssert])
- buf.writeUInt16LE(value, offset[, noAssert])
- buf.writeUInt32BE(value, offset[, noAssert])
- buf.writeUInt32LE(value, offset[, noAssert])
- buf.writeUIntBE(value, offset, byteLength[, noAssert])
- buf.writeUIntLE(value, offset, byteLength[, noAssert])
- buffer.INSPECT_MAX_BYTES
- Class: SlowBuffer
- Child Process
- Asynchronous Process Creation
- Synchronous Process Creation
- Class: ChildProcess
- maxBuffer and Unicode
- Cluster
- How It Works
- Class: Worker
- Event: 'disconnect'
- Event: 'exit'
- Event: 'fork'
- Event: 'listening'
- Event: 'message'
- Event: 'online'
- Event: 'setup'
- cluster.disconnect([callback])
- cluster.fork([env])
- cluster.isMaster
- cluster.isWorker
- cluster.schedulingPolicy
- cluster.settings
- cluster.setupMaster([settings])
- cluster.worker
- cluster.workers
- Command Line Options
- Synopsis
- Options
- -v, --version
- -h, --help
- -e, --eval "script"
- -p, --print "script"
- -c, --check
- -i, --interactive
- -r, --require module
- --no-deprecation
- --trace-deprecation
- --throw-deprecation
- --trace-sync-io
- --zero-fill-buffers
- --track-heap-objects
- --prof-process
- --v8-options
- --tls-cipher-list=list
- --enable-fips
- --force-fips
- --icu-data-dir=file
- Environment Variables
- Console
- Crypto
- Class: Certificate
- Class: Cipher
- Class: Decipher
- Class: DiffieHellman
- diffieHellman.computeSecret(other_public_key[, input_encoding][, output_encoding])
- diffieHellman.generateKeys([encoding])
- diffieHellman.getGenerator([encoding])
- diffieHellman.getPrime([encoding])
- diffieHellman.getPrivateKey([encoding])
- diffieHellman.getPublicKey([encoding])
- diffieHellman.setPrivateKey(private_key[, encoding])
- diffieHellman.setPublicKey(public_key[, encoding])
- diffieHellman.verifyError
- Class: ECDH
- Class: Hash
- Class: Hmac
- Class: Sign
- Class: Verify
- crypto module methods and properties
- crypto.DEFAULT_ENCODING
- crypto.createCipher(algorithm, password)
- crypto.createCipheriv(algorithm, key, iv)
- crypto.createCredentials(details)
- crypto.createDecipher(algorithm, password)
- crypto.createDecipheriv(algorithm, key, iv)
- crypto.createDiffieHellman(prime[, prime_encoding][, generator][, generator_encoding])
- crypto.createDiffieHellman(prime_length[, generator])
- crypto.createECDH(curve_name)
- crypto.createHash(algorithm)
- crypto.createHmac(algorithm, key)
- crypto.createSign(algorithm)
- crypto.createVerify(algorithm)
- crypto.getCiphers()
- crypto.getCurves()
- crypto.getDiffieHellman(group_name)
- crypto.getHashes()
- crypto.pbkdf2(password, salt, iterations, keylen[, digest], callback)
- crypto.pbkdf2Sync(password, salt, iterations, keylen[, digest])
- crypto.privateDecrypt(private_key, buffer)
- crypto.privateEncrypt(private_key, buffer)
- crypto.publicDecrypt(public_key, buffer)
- crypto.publicEncrypt(public_key, buffer)
- crypto.randomBytes(size[, callback])
- crypto.setEngine(engine[, flags])
- Notes
- Debugger
- UDP / Datagram Sockets
- Class: dgram.Socket
- Event: 'close'
- Event: 'error'
- Event: 'listening'
- Event: 'message'
- socket.addMembership(multicastAddress[, multicastInterface])
- socket.address()
- socket.bind([port][, address][, callback])
- socket.bind(options[, callback])
- socket.close([callback])
- socket.dropMembership(multicastAddress[, multicastInterface])
- socket.send(msg, [offset, length,] port, address[, callback])
- socket.setBroadcast(flag)
- socket.setMulticastLoopback(flag)
- socket.setMulticastTTL(ttl)
- socket.setTTL(ttl)
- socket.ref()
- socket.unref()
- Change to asynchronous socket.bind() behavior
- dgram module functions
- Class: dgram.Socket
- DNS
- dns.getServers()
- dns.lookup(hostname[, options], callback)
- dns.lookupService(address, port, callback)
- dns.resolve(hostname[, rrtype], callback)
- dns.resolve4(hostname, callback)
- dns.resolve6(hostname, callback)
- dns.resolveCname(hostname, callback)
- dns.resolveMx(hostname, callback)
- dns.resolveNs(hostname, callback)
- dns.resolveSoa(hostname, callback)
- dns.resolveSrv(hostname, callback)
- dns.resolveTxt(hostname, callback)
- dns.reverse(ip, callback)
- dns.setServers(servers)
- Error codes
- Implementation considerations
- Domain
- Errors
- Events
- Passing arguments and this to listeners
- Asynchronous vs. Synchronous
- Handling events only once
- Error events
- Class: EventEmitter
- Event: 'newListener'
- Event: 'removeListener'
- EventEmitter.listenerCount(emitter, eventName)
- EventEmitter.defaultMaxListeners
- emitter.addListener(eventName, listener)
- emitter.emit(eventName[, arg1][, arg2][, ...])
- emitter.getMaxListeners()
- emitter.listenerCount(eventName)
- emitter.listeners(eventName)
- emitter.on(eventName, listener)
- emitter.once(eventName, listener)
- emitter.removeAllListeners([eventName])
- emitter.removeListener(eventName, listener)
- emitter.setMaxListeners(n)
- File System
- Buffer API
- Class: fs.FSWatcher
- Class: fs.ReadStream
- Class: fs.Stats
- Class: fs.WriteStream
- fs.access(path[, mode], callback)
- fs.accessSync(path[, mode])
- fs.appendFile(file, data[, options], callback)
- fs.appendFileSync(file, data[, options])
- fs.chmod(path, mode, callback)
- fs.chmodSync(path, mode)
- fs.chown(path, uid, gid, callback)
- fs.chownSync(path, uid, gid)
- fs.close(fd, callback)
- fs.closeSync(fd)
- fs.createReadStream(path[, options])
- fs.createWriteStream(path[, options])
- fs.exists(path, callback)
- fs.existsSync(path)
- fs.fchmod(fd, mode, callback)
- fs.fchmodSync(fd, mode)
- fs.fchown(fd, uid, gid, callback)
- fs.fchownSync(fd, uid, gid)
- fs.fdatasync(fd, callback)
- fs.fdatasyncSync(fd)
- fs.fstat(fd, callback)
- fs.fstatSync(fd)
- fs.fsync(fd, callback)
- fs.fsyncSync(fd)
- fs.ftruncate(fd, len, callback)
- fs.ftruncateSync(fd, len)
- fs.futimes(fd, atime, mtime, callback)
- fs.futimesSync(fd, atime, mtime)
- fs.lchmod(path, mode, callback)
- fs.lchmodSync(path, mode)
- fs.lchown(path, uid, gid, callback)
- fs.lchownSync(path, uid, gid)
- fs.link(srcpath, dstpath, callback)
- fs.linkSync(srcpath, dstpath)
- fs.lstat(path, callback)
- fs.lstatSync(path)
- fs.mkdir(path[, mode], callback)
- fs.mkdirSync(path[, mode])
- fs.mkdtemp(prefix, callback)
- fs.mkdtempSync(template)
- fs.open(path, flags[, mode], callback)
- fs.openSync(path, flags[, mode])
- fs.read(fd, buffer, offset, length, position, callback)
- fs.readdir(path, callback)
- fs.readdirSync(path)
- fs.readFile(file[, options], callback)
- fs.readFileSync(file[, options])
- fs.readlink(path, callback)
- fs.readlinkSync(path)
- fs.realpath(path[, cache], callback)
- fs.readSync(fd, buffer, offset, length, position)
- fs.realpathSync(path[, cache])
- fs.rename(oldPath, newPath, callback)
- fs.renameSync(oldPath, newPath)
- fs.rmdir(path, callback)
- fs.rmdirSync(path)
- fs.stat(path, callback)
- fs.statSync(path)
- fs.symlink(target, path[, type], callback)
- fs.symlinkSync(target, path[, type])
- fs.truncate(path, len, callback)
- fs.truncateSync(path, len)
- fs.unlink(path, callback)
- fs.unlinkSync(path)
- fs.unwatchFile(filename[, listener])
- fs.utimes(path, atime, mtime, callback)
- fs.utimesSync(path, atime, mtime)
- fs.watch(filename[, options][, listener])
- fs.watchFile(filename[, options], listener)
- fs.write(fd, buffer, offset, length[, position], callback)
- fs.write(fd, data[, position[, encoding]], callback)
- fs.writeFile(file, data[, options], callback)
- fs.writeFileSync(file, data[, options])
- fs.writeSync(fd, buffer, offset, length[, position])
- fs.writeSync(fd, data[, position[, encoding]])
- Global Objects
- HTTP
- Class: http.Agent
- Class: http.ClientRequest
- Event: 'abort'
- Event: 'checkExpectation'
- Event: 'connect'
- Event: 'continue'
- Event: 'response'
- Event: 'socket'
- Event: 'upgrade'
- request.abort()
- request.end([data][, encoding][, callback])
- request.flushHeaders()
- request.setNoDelay([noDelay])
- request.setSocketKeepAlive([enable][, initialDelay])
- request.setTimeout(timeout[, callback])
- request.write(chunk[, encoding][, callback])
- Class: http.Server
- Event: 'checkContinue'
- Event: 'clientError'
- Event: 'close'
- Event: 'connect'
- Event: 'connection'
- Event: 'request'
- Event: 'upgrade'
- server.close([callback])
- server.listen(handle[, callback])
- server.listen(path[, callback])
- server.listen(port[, hostname][, backlog][, callback])
- server.listening
- server.maxHeadersCount
- server.setTimeout(msecs, callback)
- server.timeout
- Class: http.ServerResponse
- Event: 'close'
- Event: 'finish'
- response.addTrailers(headers)
- response.end([data][, encoding][, callback])
- response.finished
- response.getHeader(name)
- response.headersSent
- response.removeHeader(name)
- response.sendDate
- response.setHeader(name, value)
- response.setTimeout(msecs, callback)
- response.statusCode
- response.statusMessage
- response.write(chunk[, encoding][, callback])
- response.writeContinue()
- response.writeHead(statusCode[, statusMessage][, headers])
- Class: http.IncomingMessage
- http.METHODS
- http.STATUS_CODES
- http.createClient([port][, host])
- http.createServer([requestListener])
- http.get(options[, callback])
- http.globalAgent
- http.request(options[, callback])
- HTTPS
- Modules
- net
- Class: net.Server
- Event: 'close'
- Event: 'connection'
- Event: 'error'
- Event: 'listening'
- server.address()
- server.close([callback])
- server.connections
- server.getConnections(callback)
- server.listen(handle[, backlog][, callback])
- server.listen(options[, callback])
- server.listen(path[, backlog][, callback])
- server.listen(port[, hostname][, backlog][, callback])
- server.listening
- server.maxConnections
- server.ref()
- server.unref()
- Class: net.Socket
- new net.Socket([options])
- Event: 'close'
- Event: 'connect'
- Event: 'data'
- Event: 'drain'
- Event: 'end'
- Event: 'error'
- Event: 'lookup'
- Event: 'timeout'
- socket.address()
- socket.bufferSize
- socket.bytesRead
- socket.bytesWritten
- socket.connect(options[, connectListener])
- socket.connect(path[, connectListener])
- socket.connect(port[, host][, connectListener])
- socket.destroy()
- socket.end([data][, encoding])
- socket.localAddress
- socket.localPort
- socket.pause()
- socket.ref()
- socket.remoteAddress
- socket.remoteFamily
- socket.remotePort
- socket.resume()
- socket.setEncoding([encoding])
- socket.setKeepAlive([enable][, initialDelay])
- socket.setNoDelay([noDelay])
- socket.setTimeout(timeout[, callback])
- socket.unref()
- socket.write(data[, encoding][, callback])
- net.connect(options[, connectListener])
- net.connect(path[, connectListener])
- net.connect(port[, host][, connectListener])
- net.createConnection(options[, connectListener])
- net.createConnection(path[, connectListener])
- net.createConnection(port[, host][, connectListener])
- net.createServer([options][, connectionListener])
- net.isIP(input)
- net.isIPv4(input)
- net.isIPv6(input)
- Class: net.Server
- OS
- Path
- process
- Event: 'beforeExit'
- Event: 'exit'
- Event: 'message'
- Event: 'rejectionHandled'
- Event: 'uncaughtException'
- Event: 'unhandledRejection'
- Exit Codes
- Signal Events
- process.abort()
- process.arch
- process.argv
- process.chdir(directory)
- process.config
- process.connected
- process.cwd()
- process.disconnect()
- process.env
- process.execArgv
- process.execPath
- process.exit([code])
- process.exitCode
- process.getegid()
- process.geteuid()
- process.getgid()
- process.getgroups()
- process.getuid()
- process.hrtime()
- process.initgroups(user, extra_group)
- process.kill(pid[, signal])
- process.mainModule
- process.memoryUsage()
- process.nextTick(callback[, arg][, ...])
- process.pid
- process.platform
- process.release
- process.send(message[, sendHandle[, options]][, callback])
- process.setegid(id)
- process.seteuid(id)
- process.setgid(id)
- process.setgroups(groups)
- process.setuid(id)
- process.stderr
- process.stdin
- process.stdout
- process.title
- process.umask([mask])
- process.uptime()
- process.version
- process.versions
- punycode
- Query String
- Readline
- REPL
- Stream
- API for Stream Consumers
- API for Stream Implementors
- Simplified Constructor API
- Streams: Under the Hood
- StringDecoder
- Timers
- TLS (SSL)
- ALPN, NPN and SNI
- Client-initiated renegotiation attack mitigation
- Modifying the Default TLS Cipher suite
- Perfect Forward Secrecy
- Class: CryptoStream
- Class: SecurePair
- Class: tls.Server
- Event: 'clientError'
- Event: 'newSession'
- Event: 'OCSPRequest'
- Event: 'resumeSession'
- Event: 'secureConnection'
- server.addContext(hostname, context)
- server.address()
- server.close([callback])
- server.connections
- server.getTicketKeys()
- server.listen(port[, hostname][, callback])
- server.setTicketKeys(keys)
- server.maxConnections
- Class: tls.TLSSocket
- new tls.TLSSocket(socket[, options])
- Event: 'OCSPResponse'
- Event: 'secureConnect'
- tlsSocket.address()
- tlsSocket.authorized
- tlsSocket.authorizationError
- tlsSocket.encrypted
- tlsSocket.getCipher()
- tlsSocket.getEphemeralKeyInfo()
- tlsSocket.getPeerCertificate([ detailed ])
- tlsSocket.getProtocol()
- tlsSocket.getSession()
- tlsSocket.getTLSTicket()
- tlsSocket.localAddress
- tlsSocket.localPort
- tlsSocket.remoteAddress
- tlsSocket.remoteFamily
- tlsSocket.remotePort
- tlsSocket.renegotiate(options, callback)
- tlsSocket.setMaxSendFragment(size)
- tls.connect(options[, callback])
- tls.connect(port[, host][, options][, callback])
- tls.createSecureContext(options)
- tls.createSecurePair([context][, isServer][, requestCert][, rejectUnauthorized][, options])
- tls.createServer(options[, secureConnectionListener])
- tls.getCiphers()
- TTY
- URL
- util
- util.debug(string)
- util.debuglog(section)
- util.deprecate(function, string)
- util.error([...])
- util.format(format[, ...])
- util.inherits(constructor, superConstructor)
- util.inspect(object[, options])
- util.isArray(object)
- util.isBoolean(object)
- util.isBuffer(object)
- util.isDate(object)
- util.isError(object)
- util.isFunction(object)
- util.isNull(object)
- util.isNullOrUndefined(object)
- util.isNumber(object)
- util.isObject(object)
- util.isPrimitive(object)
- util.isRegExp(object)
- util.isString(object)
- util.isSymbol(object)
- util.isUndefined(object)
- util.log(string)
- util.print([...])
- util.pump(readableStream, writableStream[, callback])
- util.puts([...])
- V8
- Executing JavaScript
- Zlib
- Examples
- Memory Usage Tuning
- Flushing
- Constants
- Class Options
- Class: zlib.Deflate
- Class: zlib.DeflateRaw
- Class: zlib.Gunzip
- Class: zlib.Gzip
- Class: zlib.Inflate
- Class: zlib.InflateRaw
- Class: zlib.Unzip
- Class: zlib.Zlib
- zlib.createDeflate([options])
- zlib.createDeflateRaw([options])
- zlib.createGunzip([options])
- zlib.createGzip([options])
- zlib.createInflate([options])
- zlib.createInflateRaw([options])
- zlib.createUnzip([options])
- Convenience Methods
- zlib.deflate(buf[, options], callback)
- zlib.deflateSync(buf[, options])
- zlib.deflateRaw(buf[, options], callback)
- zlib.deflateRawSync(buf[, options])
- zlib.gunzip(buf[, options], callback)
- zlib.gunzipSync(buf[, options])
- zlib.gzip(buf[, options], callback)
- zlib.gzipSync(buf[, options])
- zlib.inflate(buf[, options], callback)
- zlib.inflateSync(buf[, options])
- zlib.inflateRaw(buf[, options], callback)
- zlib.inflateRawSync(buf[, options])
- zlib.unzip(buf[, options], callback)
- zlib.unzipSync(buf[, options])
About this Documentation#
The goal of this documentation is to comprehensively explain the Node.js API, both from a reference as well as a conceptual point of view. Each section describes a built-in module or high-level concept.
Where appropriate, property types, method arguments, and the arguments provided to event handlers are detailed in a list underneath the topic heading.
Every .html
document has a corresponding .json
document presenting
the same information in a structured manner. This feature is
experimental, and added for the benefit of IDEs and other utilities that
wish to do programmatic things with the documentation.
Every .html
and .json
file is generated based on the corresponding
.md
file in the doc/api/
folder in Node.js's source tree. The
documentation is generated using the tools/doc/generate.js
program.
The HTML template is located at doc/template.html.
If you find an error in this documentation, please submit an issue or see the contributing guide for directions on how to submit a patch.
Stability Index#
Throughout the documentation, you will see indications of a section's stability. The Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Some are so proven, and so relied upon, that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous and in the process of being redesigned.
The stability indices are as follows:
Stability: 0 - Deprecated This feature is known to be problematic, and changes are planned. Do not rely on it. Use of the feature may cause warnings. Backwards compatibility should not be expected.
Stability: 1 - Experimental This feature is subject to change, and is gated by a command line flag. It may change or be removed in future versions.
Stability: 2 - Stable The API has proven satisfactory. Compatibility with the npm ecosystem is a high priority, and will not be broken unless absolutely necessary.
Stability: 3 - Locked Only bug fixes and changes related to security or performance will be accepted. Please do not suggest API changes in this area; they will be refused.
JSON Output#
Stability: 1 - Experimental
Every HTML file generated from the markdown has a corresponding JSON file with the same data.
This feature was added in Node.js v0.6.12. It is experimental.
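As a rough, hedged sketch of how the JSON variant might be consumed programmatically (the URL below is an assumption about where the .json file sits for this release, not something stated in this text):
// fetch-docs.js -- illustrative only; the documentation URL is a guess
const https = require('https');
https.get('https://nodejs.org/dist/latest-v5.x/docs/api/assert.json', (res) => {
  res.setEncoding('utf8');
  let body = '';
  res.on('data', (chunk) => body += chunk);
  res.on('end', () => {
    const doc = JSON.parse(body);
    // Print the top-level keys of the structured documentation object.
    console.log(Object.keys(doc));
  });
}).on('error', (err) => console.error(err));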
Syscalls and man pages#
System calls like open(2) and read(2) define the interface between user programs
and the underlying operating system. Node functions which simply wrap a syscall,
like fs.open(), will document that. The docs link to the corresponding man
pages (short for manual pages) which describe how the syscalls work.
Caveat: some syscalls, like lchown(2), are BSD-specific. That means, for
example, that fs.lchown()
only works on Mac OS X and other BSD-derived systems,
and is not available on Linux.
Most Unix syscalls have Windows equivalents, but behavior may differ on Windows relative to Linux and OS X. For an example of the subtle ways in which it's sometimes impossible to replace Unix syscall semantics on Windows, see Node issue 4760.
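As a small, hedged illustration of the caveat above, a script can check process.platform before calling a BSD-only wrapper such as fs.lchown() (the path below is a hypothetical placeholder):
// lchown-guard.js -- a sketch only; '/tmp/example-link' is not a real path
const fs = require('fs');
if (process.platform === 'darwin') {
  // lchown(2) is available on OS X and other BSD-derived systems.
  fs.lchown('/tmp/example-link', process.getuid(), process.getgid(), (err) => {
    if (err) throw err;
    console.log('ownership of the symlink itself was changed');
  });
} else {
  console.log('lchown(2) is not expected to be available on ' + process.platform);
}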
Synopsis#
An example of a web server written with Node.js which responds with 'Hello World':
const http = require('http');
http.createServer((request, response) => {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
}).listen(8124);
console.log('Server running at http://127.0.0.1:8124/');
To run the server, put the code into a file called example.js
and execute
it with the node program
$ node example.js
Server running at http://127.0.0.1:8124/
All of the examples in the documentation can be run similarly.
Addons#
Node.js Addons are dynamically-linked shared objects, written in C or C++, that
can be loaded into Node.js using the require()
function, and used
just as if they were an ordinary Node.js module. They are used primarily to
provide an interface between JavaScript running in Node.js and C/C++ libraries.
At the moment, the method for implementing Addons is rather complicated, involving knowledge of several components and APIs:
- V8: the C++ library Node.js currently uses to provide the JavaScript implementation. V8 provides the mechanisms for creating objects, calling functions, etc. V8's API is documented mostly in the v8.h header file (deps/v8/include/v8.h in the Node.js source tree), which is also available online.
- libuv: The C library that implements the Node.js event loop, its worker threads and all of the asynchronous behaviors of the platform. It also serves as a cross-platform abstraction library, giving easy, POSIX-like access across all major operating systems to many common system tasks, such as interacting with the filesystem, sockets, timers and system events. libuv also provides a pthreads-like threading abstraction that may be used to power more sophisticated asynchronous Addons that need to move beyond the standard event loop. Addon authors are encouraged to think about how to avoid blocking the event loop with I/O or other time-intensive tasks by off-loading work via libuv to non-blocking system operations, worker threads or a custom use of libuv's threads.
- Internal Node.js libraries. Node.js itself exports a number of C/C++ APIs that Addons can use — the most important of which is the node::ObjectWrap class.
- Node.js includes a number of other statically linked libraries including OpenSSL. These other libraries are located in the deps/ directory in the Node.js source tree. Only the V8 and OpenSSL symbols are purposefully re-exported by Node.js and may be used to various extents by Addons. See Linking to Node.js' own dependencies for additional information.
All of the following examples are available for download and may be used as a starting-point for your own Addon.
Hello world#
This "Hello world" example is a simple Addon, written in C++, that is the equivalent of the following JavaScript code:
module.exports.hello = () => 'world';
First, create the file hello.cc
:
// hello.cc
#include <node.h>
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "world"));
}
void init(Local<Object> exports) {
NODE_SET_METHOD(exports, "hello", Method);
}
NODE_MODULE(addon, init)
} // namespace demo
Note that all Node.js Addons must export an initialization function following the pattern:
void Initialize(Local<Object> exports);
NODE_MODULE(module_name, Initialize)
There is no semi-colon after NODE_MODULE as it's not a function (see node.h).
The module_name
must match the filename of the final binary (excluding
the .node suffix).
In the hello.cc
example, then, the initialization function is init
and the
Addon module name is addon.
Building#
Once the source code has been written, it must be compiled into the binary
addon.node
file. To do so, create a file called binding.gyp
in the
top-level of the project describing the build configuration of your module
using a JSON-like format. This file is used by node-gyp -- a tool written
specifically to compile Node.js Addons.
{
"targets": [
{
"target_name": "addon",
"sources": [ "hello.cc" ]
}
]
}
Note: A version of the node-gyp
utility is bundled and distributed with
Node.js as part of npm
. This version is not made directly available for
developers to use and is intended only to support the ability to use the
npm install
command to compile and install Addons. Developers who wish to
use node-gyp
directly can install it using the command
npm install -g node-gyp. See the node-gyp
installation instructions for
more information, including platform-specific requirements.
Once the binding.gyp
file has been created, use node-gyp configure
to
generate the appropriate project build files for the current platform. This
will generate either a Makefile
(on Unix platforms) or a vcxproj
file
(on Windows) in the build/
directory.
Next, invoke the node-gyp build
command to generate the compiled addon.node
file. This will be put into the build/Release/
directory.
When using npm install
to install a Node.js Addon, npm uses its own bundled
version of node-gyp
to perform this same set of actions, generating a
compiled version of the Addon for the user's platform on demand.
Once built, the binary Addon can be used from within Node.js by pointing
require()
to the built addon.node
module:
// hello.js
const addon = require('./build/Release/addon');
console.log(addon.hello()); // 'world'
Please see the examples below for further information or
https://github.com/arturadib/node-qt for an example in production.
Because the exact path to the compiled Addon binary can vary depending on how
it is compiled (i.e. sometimes it may be in ./build/Debug/), Addons can use
the bindings package to load the compiled module.
Note that while the bindings
package implementation is more sophisticated
in how it locates Addon modules, it is essentially using a try-catch pattern
similar to:
try {
  return require('./build/Release/addon.node');
} catch (err) {
  return require('./build/Debug/addon.node');
}
Linking to Node.js' own dependencies#
Node.js uses a number of statically linked libraries such as V8, libuv and
OpenSSL. All Addons are required to link to V8 and may link to any of the
other dependencies as well. Typically, this is as simple as including
the appropriate #include <...>
statements (e.g. #include <v8.h>) and
node-gyp
will locate the appropriate headers automatically. However, there
are a few caveats to be aware of:
- When node-gyp runs, it will detect the specific release version of Node.js and download either the full source tarball or just the headers. If the full source is downloaded, Addons will have complete access to the full set of Node.js dependencies. However, if only the Node.js headers are downloaded, then only the symbols exported by Node.js will be available.
- node-gyp can be run using the --nodedir flag pointing at a local Node.js source image. Using this option, the Addon will have access to the full set of dependencies.
Loading Addons using require()#
The filename extension of the compiled Addon binary is .node
(as opposed
to .dll
or .so). The require()
function is written to look for
files with the .node
file extension and initialize those as dynamically-linked
libraries.
When calling require(), the .node
extension can usually be
omitted and Node.js will still find and initialize the Addon. One caveat,
however, is that Node.js will first attempt to locate and load modules or
JavaScript files that happen to share the same base name. For instance, if
there is a file addon.js
in the same directory as the binary addon.node,
then require('addon')
will give precedence to the addon.js
file
and load it instead.
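For example (a hypothetical layout, not from the original text), being explicit about the path and extension avoids the ambiguity described above:
// Hypothetical layout: both addon.js and addon.node exist in ./build/Release/.
// This resolves to ./build/Release/addon.js because of the precedence above:
const js = require('./build/Release/addon');
// Spelling out the .node extension loads the compiled Addon unambiguously:
const native = require('./build/Release/addon.node');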
Native Abstractions for Node.js#
Each of the examples illustrated in this document make direct use of the Node.js and V8 APIs for implementing Addons. It is important to understand that the V8 API can, and has, changed dramatically from one V8 release to the next (and one major Node.js release to the next). With each change, Addons may need to be updated and recompiled in order to continue functioning. The Node.js release schedule is designed to minimize the frequency and impact of such changes but there is little that Node.js can do currently to ensure stability of the V8 APIs.
The Native Abstractions for Node.js (or nan) provide a set of tools that
Addon developers are recommended to use to keep compatibility between past and
future releases of V8 and Node.js. See the nan
examples for an
illustration of how it can be used.
Addon examples#
Following are some example Addons intended to help developers get started. The examples make use of the V8 APIs. Refer to the online V8 reference for help with the various V8 calls, and V8's Embedder's Guide for an explanation of several concepts used such as handles, scopes, function templates, etc.
Each of these examples uses the following binding.gyp file:
{
"targets": [
{
"target_name": "addon",
"sources": [ "addon.cc" ]
}
]
}
In cases where there is more than one .cc
file, simply add the additional
filename to the sources
array. For example:
"sources": ["addon.cc", "myexample.cc"]
Once the binding.gyp
file is ready, the example Addons can be configured and
built using node-gyp
:
$ node-gyp configure build
Function arguments#
Addons will typically expose objects and functions that can be accessed from JavaScript running within Node.js. When functions are invoked from JavaScript, the input arguments and return value must be mapped to and from the C/C++ code.
The following example illustrates how to read function arguments passed from JavaScript and how to return a result:
// addon.cc
#include <node.h>
namespace demo {
using v8::Exception;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
// This is the implementation of the "add" method
// Input arguments are passed using the
// const FunctionCallbackInfo<Value>& args struct
void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  // Check the number of arguments passed.
  if (args.Length() < 2) {
    // Throw an Error that is passed back to JavaScript
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Wrong number of arguments")));
    return;
  }
  // Check the argument types
  if (!args[0]->IsNumber() || !args[1]->IsNumber()) {
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Wrong arguments")));
    return;
  }
  // Perform the operation
  double value = args[0]->NumberValue() + args[1]->NumberValue();
  Local<Number> num = Number::New(isolate, value);
  // Set the return value (using the passed in
  // FunctionCallbackInfo<Value>&)
  args.GetReturnValue().Set(num);
}
void Init(Local<Object> exports) {
NODE_SET_METHOD(exports, "add", Add);
}
NODE_MODULE(addon, Init)
} // namespace demo
Once compiled, the example Addon can be required and used from within Node.js:
// test.js
const addon = require('./build/Release/addon');
console.log('This should be eight:', addon.add(3, 5));
Callbacks#
It is common practice within Addons to pass JavaScript functions to a C++ function and execute them from there. The following example illustrates how to invoke such callbacks:
// addon.cc
#include <node.h>
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Null;
using v8::Object;
using v8::String;
using v8::Value;
void RunCallback(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Function> cb = Local<Function>::Cast(args[0]);
  const unsigned argc = 1;
  Local<Value> argv[argc] = { String::NewFromUtf8(isolate, "hello world") };
  cb->Call(Null(isolate), argc, argv);
}
void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", RunCallback);
}
NODE_MODULE(addon, Init)
} // namespace demo
Note that this example uses a two-argument form of Init()
that receives
the full module
object as the second argument. This allows the Addon
to completely overwrite exports
with a single function instead of
adding the function as a property of exports.
To test it, run the following JavaScript:
// test.js
const addon = require('./build/Release/addon');
addon((msg) => {
  console.log(msg); // 'hello world'
});
Note that, in this example, the callback function is invoked synchronously.
Object factory#
Addons can create and return new objects from within a C++ function as
illustrated in the following example. An object is created and returned with a
property msg
that echoes the string passed to createObject():
// addon.cc
#include <node.h>
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void CreateObject(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Object> obj = Object::New(isolate);
  obj->Set(String::NewFromUtf8(isolate, "msg"), args[0]->ToString());
  args.GetReturnValue().Set(obj);
}
void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateObject);
}
NODE_MODULE(addon, Init)
} // namespace demo
To test it in JavaScript:
// test.js
const addon = require('./build/Release/addon');
var obj1 = addon('hello');
var obj2 = addon('world');
console.log(obj1.msg + ' ' + obj2.msg); // 'hello world'
Function factory#
Another common scenario is creating JavaScript functions that wrap C++ functions and returning those back to JavaScript:
// addon.cc
#include <node.h>
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void MyFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "hello world"));
}
void CreateFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, MyFunction);
  Local<Function> fn = tpl->GetFunction();
  // omit this to make it anonymous
  fn->SetName(String::NewFromUtf8(isolate, "theFunction"));
  args.GetReturnValue().Set(fn);
}
void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateFunction);
}
NODE_MODULE(addon, Init)
} // namespace demo
To test:
// test.js
const addon = require('./build/Release/addon');
var fn = addon();
console.log(fn()); // 'hello world'
Wrapping C++ objects#
It is also possible to wrap C++ objects/classes in a way that allows new
instances to be created using the JavaScript new
operator:
// addon.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::Local;
using v8::Object;
void InitAll(Local<Object> exports) {
  MyObject::Init(exports);
}
NODE_MODULE(addon, InitAll)
} // namespace demo
Then, in myobject.h, the wrapper class inherits from node::ObjectWrap:
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Local<v8::Object> exports);
 private:
  explicit MyObject(double value = 0);
  ~MyObject();
  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};
} // namespace demo
#endif
In myobject.cc, implement the various methods that are to be exposed.
Below, the method plusOne()
is exposed by adding it to the constructor's
prototype:
// myobject.cc
#include "myobject.h"
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;
Persistent<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Local<Object> exports) {
  Isolate* isolate = exports->GetIsolate();
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);
  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);
  constructor.Reset(isolate, tpl->GetFunction());
  exports->Set(String::NewFromUtf8(isolate, "MyObject"),
               tpl->GetFunction());
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    args.GetReturnValue().Set(cons->NewInstance(argc, argv));
  }
}
void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
  obj->value_ += 1;
  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}
} // namespace demo
To build this example, the myobject.cc
file must be added to the
binding.gyp:
{
"targets": [
{
"target_name": "addon",
"sources": [
"addon.cc",
"myobject.cc"
]
}
]
}
Test it with:
// test.js
const addon = require('./build/Release/addon');
var obj = new addon.MyObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13
Factory of wrapped objects#
Alternatively, it is possible to use a factory pattern to avoid explicitly
creating object instances using the JavaScript new
operator:
var obj = addon.createObject();
// instead of:
// var obj = new addon.Object();
First, the createObject()
method is implemented in addon.cc:
// addon.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}
void InitAll(Local<Object> exports, Local<Object> module) {
  MyObject::Init(exports->GetIsolate());
  NODE_SET_METHOD(module, "exports", CreateObject);
}
NODE_MODULE(addon, InitAll)
} // namespace demo
In myobject.h, the static method NewInstance()
is added to handle
instantiating the object. This method takes the place of using new
in
JavaScript:
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Isolate* isolate);
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
 private:
  explicit MyObject(double value = 0);
  ~MyObject();
  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};
} // namespace demo
#endif
The implementation in myobject.cc
is similar to the previous example:
// myobject.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;
Persistent<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Isolate* isolate) {
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);
  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);
  constructor.Reset(isolate, tpl->GetFunction());
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    args.GetReturnValue().Set(cons->NewInstance(argc, argv));
  }
}
void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Object> instance = cons->NewInstance(argc, argv);
  args.GetReturnValue().Set(instance);
}
void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
  obj->value_ += 1;
  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}
} // namespace demo
Once again, to build this example, the myobject.cc
file must be added to the
binding.gyp:
{
"targets": [
{
"target_name": "addon",
"sources": [
"addon.cc",
"myobject.cc"
]
}
]
}
Test it with:
// test.js
const createObject = require('./build/Release/addon');
var obj = createObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13
var obj2 = createObject(20);
console.log(obj2.plusOne()); // 21
console.log(obj2.plusOne()); // 22
console.log(obj2.plusOne()); // 23
Passing wrapped objects around#
In addition to wrapping and returning C++ objects, it is possible to pass
wrapped objects around by unwrapping them with the Node.js helper function
node::ObjectWrap::Unwrap. The following example shows a function add()
that can take two MyObject
objects as input arguments:
// addon.cc
#include <node.h>
#include <node_object_wrap.h>
#include "myobject.h"
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}
void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  MyObject* obj1 = node::ObjectWrap::Unwrap<MyObject>(
      args[0]->ToObject());
  MyObject* obj2 = node::ObjectWrap::Unwrap<MyObject>(
      args[1]->ToObject());
  double sum = obj1->value() + obj2->value();
  args.GetReturnValue().Set(Number::New(isolate, sum));
}
void InitAll(Local<Object> exports) {
  MyObject::Init(exports->GetIsolate());
  NODE_SET_METHOD(exports, "createObject", CreateObject);
  NODE_SET_METHOD(exports, "add", Add);
}
NODE_MODULE(addon, InitAll)
} // namespace demo
In myobject.h, a new public method is added to allow access to private values
after unwrapping the object.
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Isolate* isolate);
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
  inline double value() const { return value_; }
 private:
  explicit MyObject(double value = 0);
  ~MyObject();
  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};
} // namespace demo
#endif
The implementation of myobject.cc
is similar to before:
// myobject.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;
Persistent<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Isolate* isolate) {
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);
  constructor.Reset(isolate, tpl->GetFunction());
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    args.GetReturnValue().Set(cons->NewInstance(argc, argv));
  }
}
void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Object> instance = cons->NewInstance(argc, argv);
  args.GetReturnValue().Set(instance);
}
} // namespace demo
Test it with:
// test.js
const addon = require('./build/Release/addon');
var obj1 = addon.createObject(10);
var obj2 = addon.createObject(20);
var result = addon.add(obj1, obj2);
console.log(result); // 30
AtExit hooks#
An "AtExit" hook is a function that is invoked after the Node.js event loop
has ended but before the JavaScript VM is terminated and Node.js shuts down.
"AtExit" hooks are registered using the node::AtExit
API.
void AtExit(callback, args)#
- callback: void (*)(void*) - A pointer to the function to call at exit.
- args: void* - A pointer to pass to the callback at exit.
Registers exit hooks that run after the event loop has ended but before the VM is killed.
AtExit takes two parameters: a pointer to a callback function to run at exit, and a pointer to untyped context data to be passed to that callback.
Callbacks are run in last-in first-out order.
The following addon.cc
implements AtExit:
// addon.cc
#undef NDEBUG
#include <assert.h>
#include <stdlib.h>
#include <node.h>
namespace demo {
using node::AtExit;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;
static char cookie[] = "yum yum";
static int at_exit_cb1_called = 0;
static int at_exit_cb2_called = 0;
static void at_exit_cb1(void* arg) {
  Isolate* isolate = static_cast<Isolate*>(arg);
  HandleScope scope(isolate);
  Local<Object> obj = Object::New(isolate);
  assert(!obj.IsEmpty()); // assert VM is still alive
  assert(obj->IsObject());
  at_exit_cb1_called++;
}
static void at_exit_cb2(void* arg) {
  assert(arg == static_cast<void*>(cookie));
  at_exit_cb2_called++;
}
static void sanity_check(void*) {
  assert(at_exit_cb1_called == 1);
  assert(at_exit_cb2_called == 2);
}
void init(Local<Object> exports) {
  AtExit(sanity_check);
  AtExit(at_exit_cb2, cookie);
  AtExit(at_exit_cb2, cookie);
  AtExit(at_exit_cb1, exports->GetIsolate());
}
NODE_MODULE(addon, init);
} // namespace demo
Test in JavaScript by running:
// test.js
const addon = require('./build/Release/addon');
Assert#
Stability: 3 - Locked
The assert
module provides a simple set of assertion tests that can be used to
test invariants. The module is intended for internal use by Node.js, but can be
used in application code via require('assert'). However, assert is not a
testing framework, and is not intended to be used as a general purpose assertion
library.
The API for the assert
module is Locked. This means that there will be no
additions or changes to any of the methods implemented and exposed by
the module.
assert(value[, message])#
An alias of assert.ok().
const assert = require('assert');
assert(true); // OK
assert(1); // OK
assert(false);
// throws "AssertionError: false == true"
assert(0);
// throws "AssertionError: 0 == true"
assert(false, 'it\'s false');
// throws "AssertionError: it's false"
assert.deepEqual(actual, expected[, message])#
Tests for deep equality between the actual
and expected
parameters.
Primitive values are compared with the equal comparison operator (==).
Only enumerable "own" properties are considered. The deepEqual()
implementation does not test object prototypes, attached symbols, or
non-enumerable properties. This can lead to some potentially surprising
results. For example, the following example does not throw an AssertionError
because the properties on the Error
object are non-enumerable:
// WARNING: This does not throw an AssertionError!
assert.deepEqual(Error('a'), Error('b'));
"Deep" equality means that the enumerable "own" properties of child objects are evaluated also:
const assert = require('assert');
const obj1 = {
  a: {
    b: 1
  }
};
const obj2 = {
  a: {
    b: 2
  }
};
const obj3 = {
  a: {
    b: 1
  }
};
const obj4 = Object.create(obj1);
assert.deepEqual(obj1, obj1);
// OK, object is equal to itself
assert.deepEqual(obj1, obj2);
// AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }
// values of b are different
assert.deepEqual(obj1, obj3);
// OK, objects are equal
assert.deepEqual(obj1, obj4);
// AssertionError: { a: { b: 1 } } deepEqual {}
// Prototypes are ignored
If the values are not equal, an AssertionError
is thrown with a message
property set equal to the value of the message
parameter. If the message
parameter is undefined, a default error message is assigned.
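As an illustrative sketch (not part of the original text), the message parameter ends up on the thrown error's message property:
const assert = require('assert');
try {
  assert.deepEqual({a: 1}, {a: 2}, 'objects are not deeply equal');
} catch (err) {
  // The thrown error is an AssertionError carrying the supplied message.
  console.log(err instanceof assert.AssertionError); // true
  console.log(err.message); // 'objects are not deeply equal'
}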
assert.deepStrictEqual(actual, expected[, message])#
Generally identical to assert.deepEqual()
with two exceptions. First,
primitive values are compared using the strict equality operator (===).
Second, object comparisons include a strict equality check of their prototypes.
const assert = require('assert');
assert.deepEqual({a:1}, {a:'1'});
// OK, because 1 == '1'
assert.deepStrictEqual({a:1}, {a:'1'});
// AssertionError: { a: 1 } deepStrictEqual { a: '1' }
// because 1 !== '1' using strict equality
If the values are not equal, an AssertionError
is thrown with a message
property set equal to the value of the message
parameter. If the message
parameter is undefined, a default error message is assigned.
assert.doesNotThrow(block[, error][, message])#
Asserts that the function block
does not throw an error. See
assert.throws()
for more details.
When assert.doesNotThrow()
is called, it will immediately call the block
function.
If an error is thrown and it is the same type as that specified by the error
parameter, then an AssertionError
is thrown. If the error is of a different
type, or if the error
parameter is undefined, the error is propagated back
to the caller.
The following, for instance, will throw the TypeError
because there is no
matching error type in the assertion:
assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  SyntaxError
);
However, the following will result in an AssertionError
with the message
'Got unwanted exception (TypeError).':
assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError
);
If an AssertionError
is thrown and a value is provided for the message
parameter, the value of message
will be appended to the AssertionError
message:
assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError,
  'Whoops'
);
// Throws: AssertionError: Got unwanted exception (TypeError). Whoops
assert.equal(actual, expected[, message])#
Tests shallow, coercive equality between the actual
and expected
parameters
using the equal comparison operator (==).
const assert = require('assert');
assert.equal(1, 1);
// OK, 1 == 1
assert.equal(1, '1');
// OK, 1 == '1'
assert.equal(1, 2);
// AssertionError: 1 == 2
assert.equal({a: {b: 1}}, {a: {b: 1}});
//AssertionError: { a: { b: 1 } } == { a: { b: 1 } }
If the values are not equal, an AssertionError
is thrown with a message
property set equal to the value of the message
parameter. If the message
parameter is undefined, a default error message is assigned.
assert.fail(actual, expected, message, operator)#
Throws an AssertionError. If message is falsy, the error message is set as
the values of actual and expected separated by the provided operator.
Otherwise, the error message is the value of message.
const assert = require('assert');
assert.fail(1, 2, undefined, '>');
// AssertionError: 1 > 2
assert.fail(1, 2, 'whoops', '>');
// AssertionError: whoops
assert.ifError(value)#
Throws value
if value
is truthy. This is useful when testing the error
argument in callbacks.
const assert = require('assert');
assert.ifError(0); // OK
assert.ifError(1); // Throws 1
assert.ifError('error') // Throws 'error'
assert.ifError(new Error()); // Throws Error
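As a hedged usage sketch, a typical Node-style callback can be checked with a single call (the file path below is only an example):
const assert = require('assert');
const fs = require('fs');
fs.readFile('/etc/hosts', 'utf8', (err, data) => {
  // Re-throws err if the read failed; otherwise execution continues.
  assert.ifError(err);
  console.log(data.length);
});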
assert.notDeepEqual(actual, expected[, message])#
Tests for any deep inequality. Opposite of assert.deepEqual().
const assert = require('assert');
const obj1 = {
  a: {
    b: 1
  }
};
const obj2 = {
  a: {
    b: 2
  }
};
const obj3 = {
  a: {
    b: 1
  }
};
const obj4 = Object.create(obj1);
assert.notDeepEqual(obj1, obj1);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj2);
// OK, obj1 and obj2 are not deeply equal
assert.notDeepEqual(obj1, obj3);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj4);
// OK, obj1 and obj4 are not deeply equal
If the values are deeply equal, an AssertionError
is thrown with a message
property set equal to the value of the message
parameter. If the message
parameter is undefined, a default error message is assigned.
assert.notDeepStrictEqual(actual, expected[, message])#
Tests for deep strict inequality. Opposite of assert.deepStrictEqual().
const assert = require('assert');
assert.notDeepEqual({a:1}, {a:'1'});
// AssertionError: { a: 1 } notDeepEqual { a: '1' }
assert.notDeepStrictEqual({a:1}, {a:'1'});
// OK
If the values are deeply and strictly equal, an AssertionError
is thrown
with a message
property set equal to the value of the message
parameter. If
the message
parameter is undefined, a default error message is assigned.
assert.notEqual(actual, expected[, message])#
Tests shallow, coercive inequality with the not equal comparison operator (!=).
const assert = require('assert');
assert.notEqual(1, 2);
// OK
assert.notEqual(1, 1);
// AssertionError: 1 != 1
assert.notEqual(1, '1');
// AssertionError: 1 != '1'
If the values are equal, an AssertionError
is thrown with a message
property set equal to the value of the message
parameter. If the message
parameter is undefined, a default error message is assigned.
assert.notStrictEqual(actual, expected[, message])#
Tests strict inequality as determined by the strict not equal operator
( !==
).
const assert = require('assert');
assert.notStrictEqual(1, 2);
// OK
assert.notStrictEqual(1, 1);
// AssertionError: 1 !== 1
assert.notStrictEqual(1, '1');
// OK
If the values are strictly equal, an AssertionError
is thrown with a
message
property set equal to the value of the message
parameter. If the
message
parameter is undefined, a default error message is assigned.
assert.ok(value[, message])#
Tests if value
is truthy. It is equivalent to
assert.equal(!!value, true, message)
.
If value
is not truthy, an AssertionError
is thrown with a message
property set equal to the value of the message
parameter. If the message
parameter is undefined
, a default error message is assigned.
const assert = require('assert');
assert.ok(true); // OK
assert.ok(1); // OK
assert.ok(false);
// throws "AssertionError: false == true"
assert.ok(0);
// throws "AssertionError: 0 == true"
assert.ok(false, 'it\'s false');
// throws "AssertionError: it's false"
assert.strictEqual(actual, expected[, message])#
Tests strict equality as determined by the strict equality operator ( ===
).
const assert = require('assert');
assert.strictEqual(1, 2);
// AssertionError: 1 === 2
assert.strictEqual(1, 1);
// OK
assert.strictEqual(1, '1');
// AssertionError: 1 === '1'
If the values are not strictly equal, an AssertionError
is thrown with a
message
property set equal to the value of the message
parameter. If the
message
parameter is undefined, a default error message is assigned.
assert.throws(block[, error][, message])#
Expects the function block
to throw an error.
If specified, error
can be a constructor, RegExp
, or validation
function.
If specified, message
will be the message provided by the AssertionError
if
the block fails to throw.
Validate instanceof using constructor:
assert.throws(
() => {
throw new Error('Wrong value');
},
Error
);
Validate error message using RegExp
:
assert.throws(
() => {
throw new Error('Wrong value');
},
/value/
);
Custom error validation:
assert.throws(
() => {
throw new Error('Wrong value');
},
function(err) {
if ( (err instanceof Error) && /value/.test(err) ) {
return true;
}
},
'unexpected error'
);
Note that error
can not be a string. If a string is provided as the second
argument, then error
is assumed to be omitted and the string will be used for
message
instead. This can lead to easy-to-miss mistakes:
// THIS IS A MISTAKE! DO NOT DO THIS!
assert.throws(myFunction, 'missing foo', 'did not throw with expected message');
// Do this instead.
assert.throws(myFunction, /missing foo/, 'did not throw with expected message');
Buffer#
Stability: 2 - Stable
Prior to the introduction of TypedArray
in ECMAScript 2015 (ES6), the
JavaScript language had no mechanism for reading or manipulating streams
of binary data. The Buffer
class was introduced as part of the Node.js
API to make it possible to interact with octet streams in the context of things
like TCP streams and file system operations.
Now that TypedArray
has been added in ES6, the Buffer
class implements the
Uint8Array
API in a manner that is more optimized and suitable for Node.js'
use cases.
Instances of the Buffer
class are similar to arrays of integers but
correspond to fixed-sized, raw memory allocations outside the V8 heap.
The size of the Buffer
is established when it is created and cannot be
resized.
The Buffer
class is a global within Node.js, making it unlikely that one
would need to ever use require('buffer')
.
const buf1 = Buffer.alloc(10);
// Creates a zero-filled Buffer of length 10.
const buf2 = Buffer.alloc(10, 1);
// Creates a Buffer of length 10, filled with 0x01.
const buf3 = Buffer.allocUnsafe(10);
// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using either fill() or write().
const buf4 = Buffer.from([1,2,3]);
// Creates a Buffer containing [01, 02, 03].
const buf5 = Buffer.from('test');
// Creates a Buffer containing ASCII bytes [74, 65, 73, 74].
const buf6 = Buffer.from('tést', 'utf8');
// Creates a Buffer containing UTF8 bytes [74, c3, a9, 73, 74].
Buffer.from()
, Buffer.alloc()
, and Buffer.allocUnsafe()
#
Historically, Buffer
instances have been created using the Buffer
constructor function, which allocates the returned Buffer
differently based on what arguments are provided:
- Passing a number as the first argument to Buffer() (e.g. new Buffer(10)) allocates a new Buffer object of the specified size. The memory allocated for such Buffer instances is not initialized and can contain sensitive data. Such Buffer objects must be initialized manually by using either buf.fill(0) or by writing to the Buffer completely. While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required between creating a fast-but-uninitialized Buffer versus creating a slower-but-safer Buffer.
- Passing a string, array, or Buffer as the first argument copies the passed object's data into the Buffer.
- Passing an ArrayBuffer returns a Buffer that shares allocated memory with the given ArrayBuffer.
Because the behavior of new Buffer()
changes significantly based on the type
of value passed as the first argument, applications that do not properly
validate the input arguments passed to new Buffer()
, or that fail to
appropriately initialize newly allocated Buffer
content, can inadvertently
introduce security and reliability issues into their code.
To make the creation of Buffer
objects more reliable and less error prone,
new Buffer.from()
, Buffer.alloc()
, and Buffer.allocUnsafe()
methods have
been introduced as an alternative means of creating Buffer
instances.
Developers should migrate all existing uses of the new Buffer()
constructors
to one of these new APIs.
- Buffer.from(array) returns a new Buffer containing a copy of the provided octets.
- Buffer.from(arrayBuffer[, byteOffset[, length]]) returns a new Buffer that shares the same allocated memory as the given ArrayBuffer.
- Buffer.from(buffer) returns a new Buffer containing a copy of the contents of the given Buffer.
- Buffer.from(str[, encoding]) returns a new Buffer containing a copy of the provided string.
- Buffer.alloc(size[, fill[, encoding]]) returns a "filled" Buffer instance of the specified size. This method can be significantly slower than Buffer.allocUnsafe(size) but ensures that newly created Buffer instances never contain old and potentially sensitive data.
- Buffer.allocUnsafe(size) returns a new Buffer of the specified size whose content must be initialized using either buf.fill(0) or written to completely.
Buffer
instances returned by Buffer.allocUnsafe(size)
may be allocated
off a shared internal memory pool if the size
is less than or equal to half
Buffer.poolSize
.
What makes Buffer.allocUnsafe(size)
"unsafe"?#
When calling Buffer.allocUnsafe()
, the segment of allocated memory is
uninitialized (it is not zeroed-out). While this design makes the allocation
of memory quite fast, the allocated segment of memory might contain old data
that is potentially sensitive. Using a Buffer
created by
Buffer.allocUnsafe(size)
without completely overwriting the memory can
allow this old data to be leaked when the Buffer
memory is read.
While there are clear performance advantages to using Buffer.allocUnsafe()
,
extra care must be taken in order to avoid introducing security
vulnerabilities into an application.
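For example, a minimal sketch of the precaution described above:
const buf = Buffer.allocUnsafe(10);
// At this point buf may contain old, potentially sensitive data...
buf.fill(0);
// ...so zero it (or write to every byte) before the Buffer is ever exposed.
console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>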
Buffers and Character Encodings#
Buffers are commonly used to represent sequences of encoded characters such as UTF8, UCS2, Base64 or even Hex-encoded data. It is possible to convert back and forth between Buffers and ordinary JavaScript string objects by using an explicit encoding method.
const buf = Buffer.from('hello world', 'ascii');
console.log(buf.toString('hex'));
// prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// prints: aGVsbG8gd29ybGQ=
The character encodings currently supported by Node.js include:
- 'ascii' - for 7-bit ASCII data only. This encoding method is very fast and will strip the high bit if set.
- 'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.
- 'utf16le' - 2 or 4 bytes, little-endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.
- 'ucs2' - Alias of 'utf16le'.
- 'base64' - Base64 string encoding. When creating a buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5.
- 'binary' - A way of encoding the buffer into a one-byte (latin-1) encoded string. The string 'latin-1' is not supported. Instead, pass 'binary' to use 'latin-1' encoding.
- 'hex' - Encode each byte as two hexadecimal characters.
Buffers and TypedArray#
Buffers are also Uint8Array
TypedArray instances. However, there are subtle
incompatibilities with the TypedArray specification in ECMAScript 2015. For
instance, while ArrayBuffer#slice()
creates a copy of the slice,
the implementation of Buffer#slice()
creates a view over the
existing Buffer without copying, making Buffer#slice()
far more efficient.
It is also possible to create new TypedArray instances from a Buffer
with the
following caveats:
- The Buffer object's memory is copied to the TypedArray, not shared.
- The Buffer object's memory is interpreted as an array of distinct elements, and not as a byte array of the target type. That is, new Uint32Array(Buffer.from([1,2,3,4])) creates a 4-element Uint32Array with elements [1,2,3,4], not a Uint32Array with a single element [0x1020304] or [0x4030201].
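A brief sketch of these two caveats (the values used are only illustrative):
const buf = Buffer.from([1, 2, 3, 4]);
const arr32 = new Uint32Array(buf);
// arr32 contains the elements [1, 2, 3, 4] - one element per byte
buf[0] = 99;
console.log(arr32[0]);
// Prints: 1 (the memory was copied, not shared)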
It is possible to create a new Buffer
that shares the same allocated memory as
a TypedArray instance by using the TypedArray object's .buffer
property:
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf1 = Buffer.from(arr); // copies the buffer
const buf2 = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf1);
// Prints: <Buffer 88 a0>, copied buffer has only two elements
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>
arr[1] = 6000;
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>
Note that when creating a Buffer
using the TypedArray's .buffer
, it is
possible to use only a portion of the underlying ArrayBuffer
by passing in
byteOffset
and length
parameters:
const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);
console.log(buf.length);
// Prints: 16
The Buffer.from()
and TypedArray.from()
(e.g.Uint8Array.from()
) have
different signatures and implementations. Specifically, the TypedArray variants
accept a second argument that is a mapping function that is invoked on every
element of the typed array:
TypedArray.from(source[, mapFn[, thisArg]])
The Buffer.from()
method, however, does not support the use of a mapping
function:
Buffer.from(array)
Buffer.from(buffer)
Buffer.from(arrayBuffer[, byteOffset [, length]])
Buffer.from(str[, encoding])
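For instance, a small sketch contrasting the two (the values used are only illustrative):
const mapped = Uint8Array.from([1, 2, 3], (x) => x * 2);
// mapped contains [2, 4, 6] - the mapping function was applied
const buf = Buffer.from([1, 2, 3]);
console.log(buf);
// Prints: <Buffer 01 02 03> - no mapping function is applied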
Buffers and ES6 iteration#
Buffers can be iterated over using the ECMAScript 2015 (ES6) for..of
syntax:
const buf = Buffer.from([1, 2, 3]);
for (var b of buf)
console.log(b)
// Prints:
// 1
// 2
// 3
Additionally, the buf.values()
, buf.keys()
, and
buf.entries()
methods can be used to create iterators.
The --zero-fill-buffers
command line option#
Node.js can be started using the --zero-fill-buffers
command line option to
force all newly allocated Buffer
and SlowBuffer
instances created using
either new Buffer(size)
and new SlowBuffer(size)
to be automatically
zero-filled upon creation. Use of this flag changes the default behavior of
these methods and can have a significant impact on performance. Use of the
--zero-fill-buffers
option is recommended only when absolutely necessary to
enforce that newly allocated Buffer
instances cannot contain potentially
sensitive data.
$ node --zero-fill-buffers
> Buffer(5);
<Buffer 00 00 00 00 00>
Class: Buffer#
The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.
new Buffer(array)#
array
<Array>
Allocates a new Buffer using an array
of octets.
const buf = new Buffer([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']
new Buffer(buffer)#
buffer
<Buffer>
Copies the passed buffer
data onto a new Buffer
instance.
const buf1 = new Buffer('buffer');
const buf2 = new Buffer(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)
new Buffer(arrayBuffer[, byteOffset[, length]])#
When passed a reference to the .buffer
property of a TypedArray
instance,
the newly created Buffer will share the same allocated memory as the
TypedArray.
The optional byteOffset
and length
arguments specify a memory range within
the arrayBuffer
that will be shared by the Buffer
.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = new Buffer(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
new Buffer(size)#
size
<Number>
Allocates a new Buffer
of size
bytes. The size
must be less than
or equal to the value of require('buffer').kMaxLength
(on 64-bit
architectures, kMaxLength
is (2^31)-1
). Otherwise, a RangeError
is
thrown. If a size
less than 0 is specified, a zero-length Buffer will be
created.
Unlike ArrayBuffers
, the underlying memory for Buffer
instances created in
this way is not initialized. The contents of a newly created Buffer
are
unknown and could contain sensitive data. Use buf.fill(0)
to initialize
a Buffer to zeroes.
const buf = new Buffer(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
new Buffer(str[, encoding])#
Creates a new Buffer containing the given JavaScript string str
. If
provided, the encoding
parameter identifies the string's character encoding.
const buf1 = new Buffer('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = new Buffer('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést
Class Method: Buffer.alloc(size[, fill[, encoding]])#
Allocates a new Buffer
of size
bytes. If fill
is undefined
, the
Buffer
will be zero-filled.
const buf = Buffer.alloc(5);
console.log(buf);
// <Buffer 00 00 00 00 00>
The size
must be less than or equal to the value of
require('buffer').kMaxLength
(on 64-bit architectures, kMaxLength
is
(2^31)-1
). Otherwise, a RangeError
is thrown. If a size
less than 0
is specified, a zero-length Buffer
will be created.
If fill
is specified, the allocated Buffer
will be initialized by calling
buf.fill(fill)
. See buf.fill() for more information.
const buf = Buffer.alloc(5, 'a');
console.log(buf);
// <Buffer 61 61 61 61 61>
If both fill
and encoding
are specified, the allocated Buffer
will be
initialized by calling buf.fill(fill, encoding)
. For example:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
Calling Buffer.alloc(size)
can be significantly slower than the alternative
Buffer.allocUnsafe(size)
but ensures that the newly created Buffer
instance
contents will never contain sensitive data.
A TypeError
will be thrown if size
is not a number.
Class Method: Buffer.allocUnsafe(size)#
size
<Number>
Allocates a new non-zero-filled Buffer
of size
bytes. The size
must
be less than or equal to the value of require('buffer').kMaxLength
(on 64-bit
architectures, kMaxLength
is (2^31)-1
). Otherwise, a RangeError
is
thrown. If a size
less than 0 is specified, a zero-length Buffer
will be
created.
The underlying memory for Buffer
instances created in this way is not
initialized. The contents of the newly created Buffer
are unknown and
may contain sensitive data. Use buf.fill(0)
to initialize such
Buffer
instances to zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
A TypeError
will be thrown if size
is not a number.
Note that the Buffer
module pre-allocates an internal Buffer
instance of
size Buffer.poolSize
that is used as a pool for the fast allocation of new
Buffer
instances created using Buffer.allocUnsafe(size)
(and the
new Buffer(size)
constructor) only when size
is less than or equal to
Buffer.poolSize >> 1
(floor of Buffer.poolSize
divided by two). The default
value of Buffer.poolSize
is 8192
but can be modified.
Use of this pre-allocated internal memory pool is a key difference between
calling Buffer.alloc(size, fill)
vs. Buffer.allocUnsafe(size).fill(fill)
.
Specifically, Buffer.alloc(size, fill)
will never use the internal Buffer
pool, while Buffer.allocUnsafe(size).fill(fill)
will use the internal
Buffer pool if size
is less than or equal to half Buffer.poolSize
. The
difference is subtle but can be important when an application requires the
additional performance that Buffer.allocUnsafe(size)
provides.
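As a rough sketch of this distinction (the sizes here are arbitrary):
console.log(Buffer.poolSize);
// Prints: 8192 (the default)
// A small allocation followed by fill() may be sliced from the shared pool:
const pooled = Buffer.allocUnsafe(256).fill(0);
// Buffer.alloc() never uses the internal pool, regardless of size:
const unpooled = Buffer.alloc(256, 0);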
Class Method: Buffer.byteLength(string[, encoding])#
- string <String> | <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer>
- encoding <String> Default: 'utf8'
- Return: <Number>
Returns the actual byte length of a string. This is not the same as
String.prototype.length
since that returns the number of characters in
a string.
Example:
const str = '\u00bd + \u00bc = \u00be';
console.log(`${str}: ${str.length} characters, ` +
`${Buffer.byteLength(str, 'utf8')} bytes`);
// ½ + ¼ = ¾: 9 characters, 12 bytes
When string
is a Buffer
/DataView
/TypedArray
/ArrayBuffer
,
returns the actual byte length.
Otherwise, converts to String
and returns the byte length of string.
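For example (a small sketch based on the accepted types listed above):
console.log(Buffer.byteLength(Buffer.alloc(10)));
// Prints: 10
console.log(Buffer.byteLength(new ArrayBuffer(16)));
// Prints: 16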
Class Method: Buffer.compare(buf1, buf2)#
Compares buf1
to buf2
typically for the purpose of sorting arrays of
Buffers. This is equivalent to calling buf1.compare(buf2)
.
const arr = [Buffer.from('1234'), Buffer.from('0123')];
arr.sort(Buffer.compare);
Class Method: Buffer.concat(list[, totalLength])#
Returns a new Buffer which is the result of concatenating all the Buffers in
the list
together.
If the list has no items, or if the totalLength
is 0, then a new zero-length
Buffer is returned.
If totalLength
is not provided, it is calculated from the Buffers in the
list
. This, however, adds an additional loop to the function, so it is faster
to provide the length explicitly.
Example: build a single Buffer from a list of three Buffers:
const buf1 = Buffer.alloc(10, 0);
const buf2 = Buffer.alloc(14, 0);
const buf3 = Buffer.alloc(18, 0);
const totalLength = buf1.length + buf2.length + buf3.length;
console.log(totalLength);
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
console.log(bufA.length);
// 42
// <Buffer 00 00 00 00 ...>
// 42
Class Method: Buffer.from(array)#
array
<Array>
Allocates a new Buffer
using an array
of octets.
const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']
A TypeError
will be thrown if array
is not an Array
.
Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])#
- arrayBuffer <ArrayBuffer> The .buffer property of a TypedArray or a new ArrayBuffer()
- byteOffset <Number> Default: 0
- length <Number> Default: arrayBuffer.length - byteOffset
When passed a reference to the .buffer
property of a TypedArray
instance,
the newly created Buffer
will share the same allocated memory as the
TypedArray.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
The optional byteOffset
and length
arguments specify a memory range within
the arrayBuffer
that will be shared by the Buffer
.
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2
A TypeError
will be thrown if arrayBuffer
is not an ArrayBuffer
.
Class Method: Buffer.from(buffer)#
buffer
<Buffer>
Copies the passed buffer
data onto a new Buffer
instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)
A TypeError
will be thrown if buffer
is not a Buffer
.
Class Method: Buffer.from(str[, encoding])#
Creates a new Buffer
containing the given JavaScript string str
. If
provided, the encoding
parameter identifies the character encoding.
If not provided, encoding
defaults to 'utf8'
.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést
A TypeError
will be thrown if str
is not a string.
Class Method: Buffer.isBuffer(obj)#
Returns true if obj is a Buffer, or false otherwise.
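For example (an illustrative sketch):
Buffer.isBuffer(Buffer.alloc(4));
// Returns: true
Buffer.isBuffer(new Uint8Array(4));
// Returns: false
Buffer.isBuffer('buffer');
// Returns: false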
Class Method: Buffer.isEncoding(encoding)#
Returns true if the encoding
is a valid encoding argument, or false
otherwise.
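For example (an illustrative sketch):
Buffer.isEncoding('utf8');
// Returns: true
Buffer.isEncoding('hex');
// Returns: true
Buffer.isEncoding('utf-9');
// Returns: false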
buf[index]#
The index operator [index]
can be used to get and set the octet at position
index
in the Buffer. The values refer to individual bytes, so the legal value
range is between 0x00
and 0xFF
(hex) or 0
and 255
(decimal).
Example: copy an ASCII string into a Buffer, one byte at a time:
const str = "Node.js";
const buf = Buffer.allocUnsafe(str.length);
for (var i = 0; i < str.length ; i++) {
buf[i] = str.charCodeAt(i);
}
console.log(buf.toString('ascii'));
// Prints: Node.js
buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])#
- target <Buffer>
- targetStart <Integer> The offset within target at which to begin comparison. Default: 0.
- targetEnd <Integer> The offset within target at which to end comparison. Ignored when targetStart is undefined. Default: target.byteLength.
- sourceStart <Integer> The offset within buf at which to begin comparison. Ignored when targetStart is undefined. Default: 0.
- sourceEnd <Integer> The offset within buf at which to end comparison. Ignored when targetStart is undefined. Default: buf.byteLength.
- Return: <Number>
Compares two Buffer instances and returns a number indicating whether buf
comes before, after, or is the same as the target
in sort order.
Comparison is based on the actual sequence of bytes in each Buffer.
- 0 is returned if target is the same as buf
- 1 is returned if target should come before buf when sorted.
- -1 is returned if target should come after buf when sorted.
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');
console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: 1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
[buf1, buf2, buf3].sort(Buffer.compare);
// produces sort order [buf1, buf3, buf2]
The optional targetStart
, targetEnd
, sourceStart
, and sourceEnd
arguments can be used to limit the comparison to specific ranges within the two
Buffer
objects.
const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);
console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1
A RangeError
will be thrown if: targetStart < 0
, sourceStart < 0
,
targetEnd > target.byteLength
or sourceEnd > source.byteLength
.
buf.copy(targetBuffer[, targetStart[, sourceStart[, sourceEnd]]])#
Copies data from a region of this Buffer to a region in the target Buffer even if the target memory region overlaps with the source.
Example: build two Buffers, then copy buf1
from byte 16 through byte 19
into buf2
, starting at the 8th byte in buf2
.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');
for (var i = 0 ; i < 26 ; i++) {
buf1[i] = i + 97; // 97 is ASCII a
}
buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!
Example: Build a single Buffer, then copy data from one region to an overlapping region in the same Buffer
const buf = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
buf[i] = i + 97; // 97 is ASCII a
}
buf.copy(buf, 0, 4, 10);
console.log(buf.toString());
// efghijghijklmnopqrstuvwxyz
buf.entries()#
- Return: <Iterator>
Creates and returns an iterator of [index, byte]
pairs from the Buffer
contents.
const buf = Buffer.from('buffer');
for (var pair of buf.entries()) {
console.log(pair);
}
// prints:
// [0, 98]
// [1, 117]
// [2, 102]
// [3, 102]
// [4, 101]
// [5, 114]
buf.equals(otherBuffer)#
Returns a boolean indicating whether this
and otherBuffer
have exactly the
same bytes.
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');
console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
buf.fill(value[, offset[, end]][, encoding])#
Fills the Buffer with the specified value. If the offset
(defaults to 0
)
and end
(defaults to buf.length
) are not given the entire buffer will be
filled. The method returns a reference to the Buffer, so calls can be chained.
This is meant as a small simplification of creating a Buffer, allowing the
creation and fill of the Buffer to be done on a single line:
const b = Buffer.alloc(50, 'h');
console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
encoding
is only relevant if value
is a string. Otherwise it is ignored.
value
is coerced to a uint32
value if it is not a String or Number.
The fill()
operation writes bytes into the Buffer dumbly. If the final write
falls in between a multi-byte character then whatever bytes fit into the buffer
are written.
Buffer.alloc(3, '\u0222');
// Prints: <Buffer c8 a2 c8>
buf.indexOf(value[, byteOffset][, encoding])#
Operates similarly to Array#indexOf()
in that it returns either the
starting index position of value
in Buffer or -1
if the Buffer does not
contain value
. The value
can be a String, Buffer or Number. Strings are by
default interpreted as UTF8. Buffers will use the entire Buffer (to compare a
partial Buffer use buf.slice()
). Numbers can range from 0 to 255.
const buf = Buffer.from('this is a buffer');
buf.indexOf('this');
// returns 0
buf.indexOf('is');
// returns 2
buf.indexOf(Buffer.from('a buffer'));
// returns 8
buf.indexOf(97); // ascii for 'a'
// returns 8
buf.indexOf(Buffer.from('a buffer example'));
// returns -1
buf.indexOf(Buffer.from('a buffer example').slice(0,8));
// returns 8
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'ucs2');
utf16Buffer.indexOf('\u03a3', 0, 'ucs2');
// returns 4
utf16Buffer.indexOf('\u03a3', -4, 'ucs2');
// returns 6
buf.includes(value[, byteOffset][, encoding])#
Operates similarly to Array#includes()
. The value
can be a String, Buffer
or Number. Strings are interpreted as UTF8 unless overridden with the
encoding
argument. Buffers will use the entire Buffer (to compare a partial
Buffer use buf.slice()
). Numbers can range from 0 to 255.
The byteOffset
indicates the index in buf
where searching begins.
const buf = Buffer.from('this is a buffer');
buf.includes('this');
// returns true
buf.includes('is');
// returns true
buf.includes(Buffer.from('a buffer'));
// returns true
buf.includes(97); // ascii for 'a'
// returns true
buf.includes(Buffer.from('a buffer example'));
// returns false
buf.includes(Buffer.from('a buffer example').slice(0,8));
// returns true
buf.includes('this', 4);
// returns false
buf.keys()#
- Return: <Iterator>
Creates and returns an iterator of Buffer keys (indices).
const buf = Buffer.from('buffer');
for (var key of buf.keys()) {
console.log(key);
}
// prints:
// 0
// 1
// 2
// 3
// 4
// 5
buf.length#
Returns the amount of memory allocated for the Buffer in number of bytes. Note that this does not necessarily reflect the amount of usable data within the Buffer. For instance, in the example below, a Buffer with 1234 bytes is allocated, but only 11 ASCII bytes are written.
const buf = Buffer.allocUnsafe(1234);
console.log(buf.length);
// Prints: 1234
buf.write('some string', 0, 'ascii');
console.log(buf.length);
// Prints: 1234
While the length
property is not immutable, changing the value of length
can result in undefined and inconsistent behavior. Applications that wish to
modify the length of a Buffer should therefore treat length
as read-only and
use buf.slice()
to create a new Buffer.
var buf = Buffer.allocUnsafe(10);
buf.write('abcdefghj', 0, 'ascii');
console.log(buf.length);
// Prints: 10
buf = buf.slice(0,5);
console.log(buf.length);
// Prints: 5
buf.readDoubleBE(offset[, noAssert])#
buf.readDoubleLE(offset[, noAssert])#
Reads a 64-bit double from the Buffer at the specified offset
with specified
endian format (readDoubleBE()
returns big endian, readDoubleLE()
returns
little endian).
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
const buf = Buffer.from([1,2,3,4,5,6,7,8]);
buf.readDoubleBE();
// Returns: 8.20788039913184e-304
buf.readDoubleLE();
// Returns: 5.447603722011605e-270
buf.readDoubleLE(1);
// throws RangeError: Index out of range
buf.readDoubleLE(1, true); // Warning: reads past the end of the buffer!
// Segmentation fault! don't do this!
buf.readFloatBE(offset[, noAssert])#
buf.readFloatLE(offset[, noAssert])#
Reads a 32-bit float from the Buffer at the specified offset
with specified
endian format (readFloatBE()
returns big endian, readFloatLE()
returns
little endian).
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
const buf = Buffer.from([1,2,3,4]);
buf.readFloatBE();
// Returns: 2.387939260590663e-38
buf.readFloatLE();
// Returns: 1.539989614439558e-36
buf.readFloatLE(1);
// throws RangeError: Index out of range
buf.readFloatLE(1, true); // Warning: reads past the end of the buffer!
// Segmentation fault! don't do this!
buf.readInt8(offset[, noAssert])#
Reads a signed 8-bit integer from the Buffer at the specified offset
.
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
Integers read from the Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([1,-2,3,4]);
buf.readInt8(0);
// returns 1
buf.readInt8(1);
// returns -2
buf.readInt16BE(offset[, noAssert])#
buf.readInt16LE(offset[, noAssert])#
Reads a signed 16-bit integer from the Buffer at the specified offset
with
the specified endian format (readInt16BE()
returns big endian,
readInt16LE()
returns little endian).
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
Integers read from the Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([1,-2,3,4]);
buf.readInt16BE();
// returns 510
buf.readInt16LE(1);
// returns 1022
buf.readInt32BE(offset[, noAssert])#
buf.readInt32LE(offset[, noAssert])#
Reads a signed 32-bit integer from the Buffer at the specified offset
with
the specified endian format (readInt32BE()
returns big endian,
readInt32LE()
returns little endian).
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
Integers read from the Buffer are interpreted as two's complement signed values.
const buf = Buffer.from([1,-2,3,4]);
buf.readInt32BE();
// returns 33424132
buf.readInt32LE();
// returns 67370497
buf.readInt32LE(1);
// throws RangeError: Index out of range
buf.readIntBE(offset, byteLength[, noAssert])#
buf.readIntLE(offset, byteLength[, noAssert])#
Reads byteLength
number of bytes from the Buffer at the specified offset
and interprets the result as a two's complement signed value. Supports up to 48
bits of accuracy. For example:
const buf = Buffer.allocUnsafe(6);
buf.writeUInt16LE(0x90ab, 0);
buf.writeUInt32LE(0x12345678, 2);
buf.readIntLE(0, 6).toString(16); // Specify 6 bytes (48 bits)
// Returns: '1234567890ab'
buf.readIntBE(0, 6).toString(16);
// Returns: '-546f87a9cbee'
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
buf.readUInt8(offset[, noAssert])#
Reads an unsigned 8-bit integer from the Buffer at the specified offset
.
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
const buf = Buffer.from([1,-2,3,4]);
buf.readUInt8(0);
// returns 1
buf.readUInt8(1);
// returns 254
buf.readUInt16BE(offset[, noAssert])#
buf.readUInt16LE(offset[, noAssert])#
Reads an unsigned 16-bit integer from the Buffer at the specified offset
with
specified endian format (readUInt16BE()
returns big endian,
readUInt16LE()
returns little endian).
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
Example:
const buf = Buffer.from([0x3, 0x4, 0x23, 0x42]);
buf.readUInt16BE(0);
// Returns: 0x0304
buf.readUInt16LE(0);
// Returns: 0x0403
buf.readUInt16BE(1);
// Returns: 0x0423
buf.readUInt16LE(1);
// Returns: 0x2304
buf.readUInt16BE(2);
// Returns: 0x2342
buf.readUInt16LE(2);
// Returns: 0x4223
buf.readUInt32BE(offset[, noAssert])#
buf.readUInt32LE(offset[, noAssert])#
Reads an unsigned 32-bit integer from the Buffer at the specified offset
with
specified endian format (readUInt32BE()
returns big endian,
readUInt32LE()
returns little endian).
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
Example:
const buf = Buffer.from([0x3, 0x4, 0x23, 0x42]);
buf.readUInt32BE(0);
// Returns: 0x03042342
console.log(buf.readUInt32LE(0));
// Returns: 0x42230403
buf.readUIntBE(offset, byteLength[, noAssert])#
buf.readUIntLE(offset, byteLength[, noAssert])#
Reads byteLength
number of bytes from the Buffer at the specified offset
and interprets the result as an unsigned integer. Supports up to 48
bits of accuracy. For example:
const buf = Buffer.allocUnsafe(6);
buf.writeUInt16LE(0x90ab, 0);
buf.writeUInt32LE(0x12345678, 2);
buf.readUIntLE(0, 6).toString(16); // Specify 6 bytes (48 bits)
// Returns: '1234567890ab'
buf.readUIntBE(0, 6).toString(16);
// Returns: 'ab9078563412'
Setting noAssert
to true
skips validation of the offset
. This allows the
offset
to be beyond the end of the Buffer.
buf.slice([start[, end]])#
Returns a new Buffer that references the same memory as the original, but
offset and cropped by the start
and end
indices.
Note that modifying the new Buffer slice will modify the memory in the original Buffer because the allocated memory of the two objects overlap.
Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original Buffer.
const buf1 = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
buf1[i] = i + 97; // 97 is ASCII a
}
const buf2 = buf1.slice(0, 3);
buf2.toString('ascii', 0, buf2.length);
// Returns: 'abc'
buf1[0] = 33;
buf2.toString('ascii', 0, buf2.length);
// Returns : '!bc'
Specifying negative indexes causes the slice to be generated relative to the end of the Buffer rather than the beginning.
const buf = Buffer.from('buffer');
buf.slice(-6, -1).toString();
// Returns 'buffe', equivalent to buf.slice(0, 5)
buf.slice(-6, -2).toString();
// Returns 'buff', equivalent to buf.slice(0, 4)
buf.slice(-5, -2).toString();
// Returns 'uff', equivalent to buf.slice(1, 4)
buf.swap16()#
- Return: <Buffer>
Interprets the Buffer
as an array of unsigned 16-bit integers and swaps
the byte-order in-place. Throws a RangeError
if the Buffer
length is
not a multiple of 16 bits. The method returns a reference to the Buffer, so
calls can be chained.
const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf.swap16();
console.log(buf);
// Prints: <Buffer 02 01 04 03 06 05 08 07>
buf.swap32()#
- Return: <Buffer>
Interprets the Buffer
as an array of unsigned 32-bit integers and swaps
the byte-order in-place. Throws a RangeError
if the Buffer
length is
not a multiple of 32 bits. The method returns a reference to the Buffer, so
calls can be chained.
const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf.swap32();
console.log(buf);
// Prints: <Buffer 04 03 02 01 08 07 06 05>
buf.toString([encoding[, start[, end]]])#
Decodes and returns a string from the Buffer data using the specified
character set encoding
.
const buf = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
buf[i] = i + 97; // 97 is ASCII a
}
buf.toString('ascii');
// Returns: 'abcdefghijklmnopqrstuvwxyz'
buf.toString('ascii',0,5);
// Returns: 'abcde'
buf.toString('utf8',0,5);
// Returns: 'abcde'
buf.toString(undefined,0,5);
// Returns: 'abcde', encoding defaults to 'utf8'
buf.toJSON()#
- Return: <Object>
Returns a JSON representation of the Buffer instance. JSON.stringify()
implicitly calls this function when stringifying a Buffer instance.
Example:
const buf = Buffer.from('test');
const json = JSON.stringify(buf);
console.log(json);
// Prints: '{"type":"Buffer","data":[116,101,115,116]}'
const copy = JSON.parse(json, (key, value) => {
return value && value.type === 'Buffer'
? Buffer.from(value.data)
: value;
});
console.log(copy.toString());
// Prints: 'test'
buf.values()#
- Return: <Iterator>
Creates and returns an iterator for Buffer values (bytes). This function is
called automatically when the Buffer is used in a for..of
statement.
const buf = Buffer.from('buffer');
for (var value of buf.values()) {
console.log(value);
}
// prints:
// 98
// 117
// 102
// 102
// 101
// 114
for (var value of buf) {
console.log(value);
}
// prints:
// 98
// 117
// 102
// 102
// 101
// 114
buf.write(string[, offset[, length]][, encoding])#
Writes string
to the Buffer at offset
using the given encoding
.
The length
parameter is the number of bytes to write. If the Buffer did not
contain enough space to fit the entire string, only a partial amount of the
string will be written however, it will not write only partially encoded
characters.
const buf = Buffer.allocUnsafe(256);
const len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾
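A small sketch of the truncation behavior described above (the string used is only illustrative):
const small = Buffer.alloc(3);
const written = small.write('¢¢'); // each '¢' is 2 bytes in UTF-8
console.log(written);
// Prints: 2 - only one whole character fits; no partial character is written
console.log(small);
// Prints: <Buffer c2 a2 00>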
buf.writeDoubleBE(value, offset[, noAssert])#
buf.writeDoubleLE(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
with specified endian
format (writeDoubleBE()
writes big endian, writeDoubleLE()
writes little
endian). The value
argument should be a valid 64-bit double. Behavior is
not defined when value
is anything other than a 64-bit double.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);
console.log(buf);
// Prints: <Buffer 43 eb d5 b7 dd f9 5f d7>
buf.writeDoubleLE(0xdeadbeefcafebabe, 0);
console.log(buf);
// Prints: <Buffer d7 5f f9 dd b7 d5 eb 43>
buf.writeFloatBE(value, offset[, noAssert])#
buf.writeFloatLE(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
with specified endian
format (writeFloatBE()
writes big endian, writeFloatLE()
writes little
endian). Behavior is not defined when value
is anything other than a 32-bit
float.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeFloatBE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer 4f 4a fe bb>
buf.writeFloatLE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer bb fe 4a 4f>
buf.writeInt8(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
. The value
should be a
valid signed 8-bit integer. Behavior is not defined when value
is anything
other than a signed 8-bit integer.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
The value
is interpreted and written as a two's complement signed integer.
const buf = Buffer.allocUnsafe(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
// Prints: <Buffer 02 fe>
buf.writeInt16BE(value, offset[, noAssert])#
buf.writeInt16LE(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
with specified endian
format (writeInt16BE()
writes big endian, writeInt16LE()
writes little
endian). The value
should be a valid signed 16-bit integer. Behavior is
not defined when value
is anything other than a signed 16-bit integer.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
The value
is interpreted and written as a two's complement signed integer.
const buf = Buffer.allocUnsafe(4);
buf.writeInt16BE(0x0102,0);
buf.writeInt16LE(0x0304,2);
console.log(buf);
// Prints: <Buffer 01 02 04 03>
buf.writeInt32BE(value, offset[, noAssert])#
buf.writeInt32LE(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
with specified endian
format (writeInt32BE()
writes big endian, writeInt32LE()
writes little
endian). The value
should be a valid signed 32-bit integer. Behavior is
not defined when value
is anything other than a signed 32-bit integer.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
The value
is interpreted and written as a two's complement signed integer.
const buf = Buffer.allocUnsafe(8);
buf.writeInt32BE(0x01020304,0);
buf.writeInt32LE(0x05060708,4);
console.log(buf);
// Prints: <Buffer 01 02 03 04 08 07 06 05>
buf.writeIntBE(value, offset, byteLength[, noAssert])#
buf.writeIntLE(value, offset, byteLength[, noAssert])#
Writes value
to the Buffer at the specified offset
and byteLength
.
Supports up to 48 bits of accuracy. For example:
const buf1 = Buffer.allocUnsafe(6);
buf1.writeIntBE(0x1234567890ab, 0, 6);
console.log(buf1);
// Prints: <Buffer 12 34 56 78 90 ab>
const buf2 = Buffer.allocUnsafe(6);
buf2.writeIntLE(0x1234567890ab, 0, 6);
console.log(buf2);
// Prints: <Buffer ab 90 78 56 34 12>
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Behavior is not defined when value
is anything other than an integer.
buf.writeUInt8(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
. The value
should be a
valid unsigned 8-bit integer. Behavior is not defined when value
is anything
other than an unsigned 8-bit integer.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);
console.log(buf);
// Prints: <Buffer 03 04 23 42>
buf.writeUInt16BE(value, offset[, noAssert])#
buf.writeUInt16LE(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
with specified endian
format (writeUInt16BE()
writes big endian, writeUInt16LE()
writes little
endian). The value
should be a valid unsigned 16-bit integer. Behavior is
not defined when value
is anything other than an unsigned 16-bit integer.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer de ad be ef>
buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer ad de ef be>
buf.writeUInt32BE(value, offset[, noAssert])#
buf.writeUInt32LE(value, offset[, noAssert])#
Writes value
to the Buffer at the specified offset
with specified endian
format (writeUInt32BE()
writes big endian, writeUInt32LE()
writes little
endian). The value
should be a valid unsigned 32-bit integer. Behavior is
not defined when value
is anything other than an unsigned 32-bit integer.
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32BE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer fe ed fa ce>
buf.writeUInt32LE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer ce fa ed fe>
buf.writeUIntBE(value, offset, byteLength[, noAssert])#
buf.writeUIntLE(value, offset, byteLength[, noAssert])#
Writes value
to the Buffer at the specified offset
and byteLength
.
Supports up to 48 bits of accuracy. For example:
const buf = Buffer.allocUnsafe(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
Set noAssert
to true to skip validation of value
and offset
. This means
that value
may be too large for the specific function and offset
may be
beyond the end of the Buffer leading to the values being silently dropped. This
should not be used unless you are certain of correctness.
Behavior is not defined when value
is anything other than an unsigned integer.
buffer.INSPECT_MAX_BYTES#
- <Number> Default: 50
Returns the maximum number of bytes that will be returned when
buffer.inspect()
is called. This can be overridden by user modules. See
util.inspect()
for more details on buffer.inspect()
behavior.
Note that this is a property on the buffer
module as returned by
require('buffer')
, not on the Buffer global or a Buffer instance.
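For example (a brief sketch):
const buffer = require('buffer');
console.log(buffer.INSPECT_MAX_BYTES);
// Prints: 50 (the default)
buffer.INSPECT_MAX_BYTES = 10;
// console.log() output for large Buffers is now truncated after 10 bytes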
Class: SlowBuffer#
Returns an un-pooled Buffer
.
In order to avoid the garbage collection overhead of creating many individually
allocated Buffers, by default allocations under 4KB are sliced from a single
larger allocated object. This approach improves both performance and memory
usage since V8 does not need to track and clean up as many Persistent
objects.
In the case where a developer may need to retain a small chunk of memory from a
pool for an indeterminate amount of time, it may be appropriate to create an
un-pooled Buffer instance using SlowBuffer
then copy out the relevant bits.
// need to keep around a few small chunks of memory
const SlowBuffer = require('buffer').SlowBuffer;
const store = [];
socket.on('readable', () => {
  var data = socket.read();
  // allocate for retained data
  var sb = SlowBuffer(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});
SlowBuffer should be used only as a last resort, after a developer
has observed undue memory retention in their applications.
new SlowBuffer(size)#
size
Number
Allocates a new SlowBuffer
of size
bytes. The size
must be less than
or equal to the value of require('buffer').kMaxLength
(on 64-bit
architectures, kMaxLength
is (2^31)-1
). Otherwise, a RangeError
is
thrown. If a size
less than 0 is specified, a zero-length SlowBuffer
will be
created.
The underlying memory for SlowBuffer
instances is not initialized. The
contents of a newly created SlowBuffer
are unknown and could contain
sensitive data. Use buf.fill(0)
to initialize a SlowBuffer
to zeroes.
const SlowBuffer = require('buffer').SlowBuffer;
const buf = new SlowBuffer(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
Child Process#
Stability: 2 - Stable
The child_process
module provides the ability to spawn child processes in
a manner that is similar, but not identical, to popen(3)
. This capability
is primarily provided by the child_process.spawn()
function:
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
By default, pipes for stdin
, stdout
and stderr
are established between
the parent Node.js process and the spawned child. It is possible to stream data
through these pipes in a non-blocking way. Note, however, that some programs
use line-buffered I/O internally. While that does not affect Node.js, it can
mean that data sent to the child process may not be immediately consumed.
The child_process.spawn()
method spawns the child process asynchronously,
without blocking the Node.js event loop. The child_process.spawnSync()
function provides equivalent functionality in a synchronous manner that blocks
the event loop until the spawned process either exits or is terminated.
For convenience, the child_process
module provides a handful of synchronous
and asynchronous alternatives to child_process.spawn()
and
child_process.spawnSync()
. Note that each of these alternatives are
implemented on top of child_process.spawn()
or child_process.spawnSync()
.
- child_process.exec(): spawns a shell and runs a command within that shell, passing the stdout and stderr to a callback function when complete.
- child_process.execFile(): similar to child_process.exec() except that it spawns the command directly without first spawning a shell.
- child_process.fork(): spawns a new Node.js process and invokes a specified module with an IPC communication channel established that allows sending messages between parent and child.
- child_process.execSync(): a synchronous version of child_process.exec() that will block the Node.js event loop.
- child_process.execFileSync(): a synchronous version of child_process.execFile() that will block the Node.js event loop.
For certain use cases, such as automating shell scripts, the synchronous counterparts may be more convenient. In many cases, however, the synchronous methods can have significant impact on performance due to stalling the event loop while spawned processes complete.
Asynchronous Process Creation#
The child_process.spawn()
, child_process.fork()
, child_process.exec()
,
and child_process.execFile()
methods all follow the idiomatic asynchronous
programming pattern typical of other Node.js APIs.
Each of the methods returns a ChildProcess
instance. These objects
implement the Node.js EventEmitter
API, allowing the parent process to
register listener functions that are called when certain events occur during
the life cycle of the child process.
The child_process.exec()
and child_process.execFile()
methods additionally
allow for an optional callback
function to be specified that is invoked
when the child process terminates.
Spawning .bat
and .cmd
files on Windows#
The importance of the distinction between child_process.exec()
and
child_process.execFile()
can vary based on platform. On Unix-type operating
systems (Unix, Linux, OSX) child_process.execFile()
can be more efficient
because it does not spawn a shell. On Windows, however, .bat
and .cmd
files are not executable on their own without a terminal, and therefore cannot
be launched using child_process.execFile()
. When running on Windows, .bat
and .cmd
files can be invoked using child_process.spawn()
with the shell
option set, with child_process.exec()
, or by spawning cmd.exe
and passing
the .bat
or .cmd
file as an argument (which is what the shell
option and
child_process.exec()
do).
// On Windows Only ...
const spawn = require('child_process').spawn;
const bat = spawn('cmd.exe', ['/c', 'my.bat']);
bat.stdout.on('data', (data) => {
console.log(data);
});
bat.stderr.on('data', (data) => {
console.log(data);
});
bat.on('exit', (code) => {
console.log(`Child exited with code ${code}`);
});
// OR...
const exec = require('child_process').exec;
exec('my.bat', (err, stdout, stderr) => {
if (err) {
console.error(err);
return;
}
console.log(stdout);
});
child_process.exec(command[, options][, callback])#
- command <String> The command to run, with space-separated arguments
- options <Object>
  - cwd <String> Current working directory of the child process
  - env <Object> Environment key-value pairs
  - encoding <String> (Default: 'utf8')
  - shell <String> Shell to execute the command with (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows. The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
  - timeout <Number> (Default: 0)
  - maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed (Default: 200*1024)
  - killSignal <String> (Default: 'SIGTERM')
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
- callback <Function> called with the output when process terminates
- Return: <ChildProcess>
Spawns a shell then executes the command
within that shell, buffering any
generated output.
const exec = require('child_process').exec;
const child = exec('cat *.js bad_file | wc -l',
(error, stdout, stderr) => {
console.log(`stdout: ${stdout}`);
console.log(`stderr: ${stderr}`);
if (error !== null) {
console.log(`exec error: ${error}`);
}
});
If a callback
function is provided, it is called with the arguments
(error, stdout, stderr)
. On success, error
will be null
. On error,
error
will be an instance of Error
. The error.code
property will be
the exit code of the child process while error.signal
will be set to the
signal that terminated the process. Any exit code other than 0
is considered
to be an error.
The stdout
and stderr
arguments passed to the callback will contain the
stdout and stderr output of the child process. By default, Node.js will decode
the output as UTF-8 and pass strings to the callback. The encoding
option
can be used to specify the character encoding used to decode the stdout and
stderr output. If encoding
is 'buffer'
, Buffer
objects will be passed to
the callback instead.
The options
argument may be passed as the second argument to customize how
the process is spawned. The default options are:
{
encoding: 'utf8',
timeout: 0,
maxBuffer: 200*1024,
killSignal: 'SIGTERM',
cwd: null,
env: null
}
If timeout
is greater than 0
, the parent will send the signal
identified by the killSignal
property (the default is 'SIGTERM'
) if the
child runs longer than timeout
milliseconds.
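For example, a hedged sketch using the timeout and killSignal options (the long-running UNIX command shown is arbitrary):
const exec = require('child_process').exec;
exec('sleep 10', { timeout: 1000, killSignal: 'SIGTERM' }, (error, stdout, stderr) => {
  if (error !== null) {
    console.log(`child terminated by signal: ${error.signal}`);
    // Prints: child terminated by signal: SIGTERM
  }
});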
Note: Unlike the exec()
POSIX system call, child_process.exec()
does not
replace the existing process and uses a shell to execute the command.
child_process.execFile(file[, args][, options][, callback])#
- file <String> The name or path of the executable file to run
- args <Array> List of string arguments
- options <Object>
  - cwd <String> Current working directory of the child process
  - env <Object> Environment key-value pairs
  - encoding <String> (Default: 'utf8')
  - timeout <Number> (Default: 0)
  - maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed (Default: 200*1024)
  - killSignal <String> (Default: 'SIGTERM')
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
- callback <Function> called with the output when process terminates
- Return: <ChildProcess>
The child_process.execFile() function is similar to child_process.exec() except that it does not spawn a shell. Rather, the specified executable file is spawned directly as a new process, making it slightly more efficient than child_process.exec().
The same options as child_process.exec()
are supported. Since a shell is not
spawned, behaviors such as I/O redirection and file globbing are not supported.
const execFile = require('child_process').execFile;
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
if (error) {
throw error;
}
console.log(stdout);
});
The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer', Buffer objects will be passed to the callback instead.
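For example (a minimal sketch), passing encoding: 'buffer' yields Buffer objects rather than strings:
const execFile = require('child_process').execFile;
execFile('node', ['--version'], { encoding: 'buffer' }, (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  // stdout is a Buffer here, not a string
  console.log(Buffer.isBuffer(stdout));        // true
  console.log(stdout.toString('utf8').trim()); // e.g. 'v5.11.0'
});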
child_process.fork(modulePath[, args][, options])#
- modulePath <String> The module to run in the child
- args <Array> List of string arguments
- options <Object>
  - cwd <String> Current working directory of the child process
  - env <Object> Environment key-value pairs
  - execPath <String> Executable used to create the child process
  - execArgv <Array> List of string arguments passed to the executable (Default: process.execArgv)
  - silent <Boolean> If true, stdin, stdout, and stderr of the child will be piped to the parent, otherwise they will be inherited from the parent; see the 'pipe' and 'inherit' options for child_process.spawn()'s stdio for more details. (Default: false)
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
- Return: <ChildProcess>
The child_process.fork() method is a special case of child_process.spawn() used specifically to spawn new Node.js processes. Like child_process.spawn(), a ChildProcess object is returned. The returned ChildProcess will have an additional communication channel built-in that allows messages to be passed back and forth between the parent and child. See ChildProcess#send() for details.
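A minimal sketch, assuming a hypothetical sibling script worker.js that listens for 'message' events and replies with process.send():
const fork = require('child_process').fork;
// worker.js is a hypothetical module; silent: true pipes the child's
// stdout/stderr back to the parent instead of inheriting them.
const worker = fork(`${__dirname}/worker.js`, ['--child'], { silent: true });
worker.stdout.on('data', (data) => {
  console.log(`worker stdout: ${data}`);
});
worker.on('message', (msg) => {
  console.log('message from worker:', msg);
});
worker.send({ start: true });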
It is important to keep in mind that spawned Node.js child processes are independent of the parent, with the exception of the IPC communication channel that is established between the two. Each process has its own memory, with its own V8 instance. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.
By default, child_process.fork() will spawn new Node.js instances using the process.execPath of the parent process. The execPath property in the options object allows for an alternative execution path to be used.
Node.js processes launched with a custom execPath will communicate with the parent process using the file descriptor (fd) identified by the environment variable NODE_CHANNEL_FD on the child process. The input and output on this fd are expected to be line-delimited JSON objects.
Note: Unlike the fork()
POSIX system call, child_process.fork()
does
not clone the current process.
child_process.spawn(command[, args][, options])#
- command <String> The command to run
- args <Array> List of string arguments
- options <Object>
  - cwd <String> Current working directory of the child process
  - env <Object> Environment key-value pairs
  - stdio <Array> | <String> Child's stdio configuration. (See options.stdio)
  - detached <Boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform. (See options.detached)
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
  - shell <Boolean> | <String> If true, runs command inside of a shell. Uses '/bin/sh' on UNIX and 'cmd.exe' on Windows. A different shell can be specified as a string. The shell should understand the -c switch on UNIX, or /s /c on Windows. Defaults to false (no shell).
- Return: <ChildProcess>
The child_process.spawn() method spawns a new process using the given command, with command line arguments in args. If omitted, args defaults to an empty array.
A third argument may be used to specify additional options, with these defaults:
{
cwd: undefined,
env: process.env
}
Use cwd
to specify the working directory from which the process is spawned.
If not given, the default is to inherit the current working directory.
Use env to specify environment variables that will be visible to the new process; the default is process.env.
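The shell option listed above enables shell features such as pipes and wildcards. A sketch (the command string is an arbitrary example):
const spawn = require('child_process').spawn;
// With shell: true the command string is run by '/bin/sh -c' on UNIX
// (or 'cmd.exe /s /c' on Windows), so the pipe below is interpreted by
// the shell rather than by Node.js.
const child = spawn('ls -lh /usr | wc -l', [], { shell: true });
child.stdout.on('data', (data) => {
  console.log(`line count: ${data}`);
});
child.on('close', (code) => {
  console.log(`child exited with code ${code}`);
});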
Example of running ls -lh /usr, capturing stdout, stderr, and the exit code:
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
Example: A very elaborate way to run 'ps ax | grep ssh'
const spawn = require('child_process').spawn;
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);
ps.stdout.on('data', (data) => {
grep.stdin.write(data);
});
ps.stderr.on('data', (data) => {
console.log(`ps stderr: ${data}`);
});
ps.on('close', (code) => {
if (code !== 0) {
console.log(`ps process exited with code ${code}`);
}
grep.stdin.end();
});
grep.stdout.on('data', (data) => {
console.log(`${data}`);
});
grep.stderr.on('data', (data) => {
console.log(`grep stderr: ${data}`);
});
grep.on('close', (code) => {
if (code !== 0) {
console.log(`grep process exited with code ${code}`);
}
});
Example of checking for failed exec:
const spawn = require('child_process').spawn;
const child = spawn('bad_command');
child.on('error', (err) => {
console.log('Failed to start child process.');
});
options.detached#
On Windows, setting options.detached
to true
makes it possible for the
child process to continue running after the parent exits. The child will have
its own console window. Once enabled for a child process, it cannot be
disabled.
On non-Windows platforms, if options.detached is set to true, the child process will be made the leader of a new process group and session. Note that child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.
By default, the parent will wait for the detached child to exit. To prevent
the parent from waiting for a given child
, use the child.unref()
method.
Doing so will cause the parent's event loop to not include the child in its
reference count, allowing the parent to exit independently of the child, unless
there is an established IPC channel between the child and parent.
When using the detached
option to start a long-running process, the process
will not stay running in the background after the parent exits unless it is
provided with a stdio
configuration that is not connected to the parent.
If the parent's stdio
is inherited, the child will remain attached to the
controlling terminal.
Example of a long-running process that is detached and also ignores its parent's stdio file descriptors, so that it keeps running after the parent terminates:
const spawn = require('child_process').spawn;
const child = spawn(process.argv[0], ['child_program.js'], {
detached: true,
stdio: 'ignore'
});
child.unref();
Alternatively one can redirect the child process' output into files:
const fs = require('fs');
const spawn = require('child_process').spawn;
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');
const child = spawn('prg', [], {
detached: true,
stdio: [ 'ignore', out, err ]
});
child.unref();
options.stdio#
The options.stdio option is used to configure the pipes that are established between the parent and child process. By default, the child's stdin, stdout, and stderr are redirected to the corresponding child.stdin, child.stdout, and child.stderr streams on the ChildProcess object. This is equivalent to setting options.stdio equal to ['pipe', 'pipe', 'pipe'].
For convenience, options.stdio
may be one of the following strings:
- 'pipe' - equivalent to ['pipe', 'pipe', 'pipe'] (the default)
- 'ignore' - equivalent to ['ignore', 'ignore', 'ignore']
- 'inherit' - equivalent to [process.stdin, process.stdout, process.stderr] or [0, 1, 2]
Otherwise, the value of options.stdio is an array where each index corresponds to an fd in the child. The fds 0, 1, and 2 correspond to stdin, stdout, and stderr, respectively. Additional fds can be specified to create additional pipes between the parent and child. The value is one of the following:
- 'pipe' - Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as ChildProcess.stdio[fd]. Pipes created for fds 0 - 2 are also available as ChildProcess.stdin, ChildProcess.stdout and ChildProcess.stderr, respectively.
- 'ipc' - Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option enables the ChildProcess.send() method. If the child writes JSON messages to this file descriptor, the ChildProcess.on('message') event handler will be triggered in the parent. If the child is a Node.js process, the presence of an IPC channel will enable process.send(), process.disconnect(), process.on('disconnect'), and process.on('message') within the child.
- 'ignore' - Instructs Node.js to ignore the fd in the child. While Node.js will always open fds 0 - 2 for the processes it spawns, setting the fd to 'ignore' will cause Node.js to open /dev/null and attach it to the child's fd.
- Stream object - Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child process to the fd that corresponds to the index in the stdio array. Note that the stream must have an underlying descriptor (file streams do not until the 'open' event has occurred).
- Positive integer - The integer value is interpreted as a file descriptor that is currently open in the parent process. It is shared with the child process, similar to how Stream objects can be shared.
- null, undefined - Use default value. For stdio fds 0, 1, and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore'.
Example:
const spawn = require('child_process').spawn;
// Child will use parent's stdios
spawn('prg', [], { stdio: 'inherit' });
// Spawn child sharing only stderr
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });
// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });
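The 'ipc' entry described above can also be used with child_process.spawn() directly. A sketch (the inline -e child script is invented for the example):
const spawn = require('child_process').spawn;
// fd 3 becomes an IPC channel, enabling child.send() in the parent and
// process.send() in the (Node.js) child.
const child = spawn(process.execPath, ['-e', `
  process.on('message', (m) => {
    process.send({ echo: m });
  });
`], {
  stdio: ['inherit', 'inherit', 'inherit', 'ipc']
});
child.on('message', (m) => {
  console.log('reply from child:', m);
  child.disconnect();
});
child.send('ping');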
It is worth noting that when an IPC channel is established between the parent and child processes, and the child is a Node.js process, the child is launched with the IPC channel unreferenced (using unref()) until the child registers an event handler for the process.on('disconnect') event. This allows the child to exit normally without the process being held open by the open IPC channel.
See also: child_process.exec()
and child_process.fork()
Synchronous Process Creation#
The child_process.spawnSync()
, child_process.execSync()
, and
child_process.execFileSync()
methods are synchronous and WILL block
the Node.js event loop, pausing execution of any additional code until the
spawned process exits.
Blocking calls like these are mostly useful for simplifying general purpose scripting tasks and for simplifying the loading/processing of application configuration at startup.
child_process.execFileSync(file[, args][, options])#
- file <String> The name or path of the executable file to run
- args <Array> List of string arguments
- options <Object>
  - cwd <String> Current working directory of the child process
  - input <String> | <Buffer> The value which will be passed as stdin to the spawned process; supplying this value will override stdio[0]
  - stdio <Array> Child's stdio configuration. (Default: 'pipe'); stderr by default will be output to the parent process' stderr unless stdio is specified
  - env <Object> Environment key-value pairs
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
  - timeout <Number> The maximum amount of time, in milliseconds, the process is allowed to run. (Default: undefined)
  - killSignal <String> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
  - maxBuffer <Number> Largest amount of data (in bytes) allowed on stdout or stderr - if exceeded, the child process is killed
  - encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
- Return: <Buffer> | <String> The stdout from the command
The child_process.execFileSync()
method is generally identical to
child_process.execFile()
with the exception that the method will not return
until the child process has fully closed. When a timeout has been encountered
and killSignal
is sent, the method won't return until the process has
completely exited. Note that if the child process intercepts and handles
the SIGTERM
signal and does not exit, the parent process will still wait
until the child process has exited.
If the process times out, or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync().
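A minimal sketch of the synchronous form (the output value is illustrative):
const execFileSync = require('child_process').execFileSync;
// Blocks until `node --version` has exited; with an encoding set, the
// return value is a string rather than a Buffer.
const version = execFileSync('node', ['--version'], { encoding: 'utf8' });
console.log(`running under ${version.trim()}`);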
child_process.execSync(command[, options])#
- command <String> The command to run
- options <Object>
  - cwd <String> Current working directory of the child process
  - input <String> | <Buffer> The value which will be passed as stdin to the spawned process; supplying this value will override stdio[0]
  - stdio <Array> Child's stdio configuration. (Default: 'pipe'); stderr by default will be output to the parent process' stderr unless stdio is specified
  - env <Object> Environment key-value pairs
  - shell <String> Shell to execute the command with (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows. The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
  - timeout <Number> The maximum amount of time, in milliseconds, the process is allowed to run. (Default: undefined)
  - killSignal <String> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
  - maxBuffer <Number> Largest amount of data (in bytes) allowed on stdout or stderr - if exceeded, the child process is killed
  - encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
- Return: <Buffer> | <String> The stdout from the command
The child_process.execSync()
method is generally identical to
child_process.exec()
with the exception that the method will not return until
the child process has fully closed. When a timeout has been encountered and
killSignal
is sent, the method won't return until the process has completely
exited. Note that if the child process intercepts and handles the SIGTERM
signal and doesn't exit, the parent process will wait until the child
process has exited.
If the process times out, or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync().
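For example (a sketch; the command is arbitrary), the thrown Error can be inspected for the spawnSync() result properties:
const execSync = require('child_process').execSync;
try {
  const out = execSync('ls -lh /usr', { encoding: 'utf8' });
  console.log(out);
} catch (err) {
  // err.status, err.signal, err.stdout and err.stderr mirror the result
  // object returned by child_process.spawnSync().
  console.error(`command failed with status ${err.status}`);
}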
child_process.spawnSync(command[, args][, options])#
- command <String> The command to run
- args <Array> List of string arguments
- options <Object>
  - cwd <String> Current working directory of the child process
  - input <String> | <Buffer> The value which will be passed as stdin to the spawned process; supplying this value will override stdio[0]
  - stdio <Array> Child's stdio configuration.
  - env <Object> Environment key-value pairs
  - uid <Number> Sets the user identity of the process. (See setuid(2).)
  - gid <Number> Sets the group identity of the process. (See setgid(2).)
  - timeout <Number> The maximum amount of time, in milliseconds, the process is allowed to run. (Default: undefined)
  - killSignal <String> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
  - maxBuffer <Number> Largest amount of data (in bytes) allowed on stdout or stderr - if exceeded, the child process is killed
  - encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
  - shell <Boolean> | <String> If true, runs command inside of a shell. Uses '/bin/sh' on UNIX and 'cmd.exe' on Windows. A different shell can be specified as a string. The shell should understand the -c switch on UNIX, or /s /c on Windows. Defaults to false (no shell).
- Return: <Object>
  - pid <Number> Pid of the child process
  - output <Array> Array of results from stdio output
  - stdout <Buffer> | <String> The contents of output[1]
  - stderr <Buffer> | <String> The contents of output[2]
  - status <Number> The exit code of the child process
  - signal <String> The signal used to kill the child process
  - error <Error> The error object if the child process failed or timed out
The child_process.spawnSync()
method is generally identical to
child_process.spawn()
with the exception that the function will not return
until the child process has fully closed. When a timeout has been encountered
and killSignal
is sent, the method won't return until the process has
completely exited. Note that if the process intercepts and handles the
SIGTERM
signal and doesn't exit, the parent process will wait until the child
process has exited.
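Unlike execSync() and execFileSync(), spawnSync() does not throw on a non-zero exit code; the result object described above is returned instead. A sketch:
const spawnSync = require('child_process').spawnSync;
const result = spawnSync('grep', ['ssh', '/etc/passwd'], { encoding: 'utf8' });
if (result.error) {
  // e.g. the executable could not be found
  console.error('failed to start grep:', result.error);
} else {
  console.log(`exit status: ${result.status}`);
  console.log(`stdout: ${result.stdout}`);
}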
Class: ChildProcess#
Instances of the ChildProcess
class are EventEmitters
that represent
spawned child processes.
Instances of ChildProcess
are not intended to be created directly. Rather,
use the child_process.spawn()
, child_process.exec()
,
child_process.execFile()
, or child_process.fork()
methods to create
instances of ChildProcess
.
Event: 'close'#
The 'close'
event is emitted when the stdio streams of a child process have
been closed. This is distinct from the 'exit'
event, since multiple
processes might share the same stdio streams.
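As an illustrative sketch of the distinction:
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);
// 'exit' may fire while the child's stdio streams are still open;
// 'close' fires only once all of the stdio streams have ended.
ls.on('exit', (code) => {
  console.log(`exit event, code ${code}`);
});
ls.on('close', (code) => {
  console.log(`close event, code ${code}`);
});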
Event: 'disconnect'#
The 'disconnect'
event is emitted after calling the
ChildProcess.disconnect()
method in the parent or child process. After
disconnecting it is no longer possible to send or receive messages, and the
ChildProcess.connected
property is false.
Event: 'error'#
- err <Error> the error.
The 'error'
event is emitted whenever:
- The process could not be spawned, or
- The process could not be killed, or
- Sending a message to the child process failed.
Note that the 'exit'
event may or may not fire after an error has occurred.
If you are listening to both the 'exit'
and 'error'
events, it is important
to guard against accidentally invoking handler functions multiple times.
See also ChildProcess#kill() and ChildProcess#send().
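One possible way to guard against double invocation (a sketch, not a prescribed pattern; 'some_command' is a hypothetical executable name):
const spawn = require('child_process').spawn;
const child = spawn('some_command');
let settled = false;
function onDone(reason) {
  if (settled) return;   // run cleanup at most once
  settled = true;
  console.log(`child finished: ${reason}`);
}
child.on('error', (err) => onDone(`error: ${err.message}`));
child.on('exit', (code, signal) => onDone(`exit: ${code} ${signal}`));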
Event: 'exit'#
The 'exit'
event is emitted after the child process ends. If the process
exited, code
is the final exit code of the process, otherwise null
. If the
process terminated due to receipt of a signal, signal
is the string name of
the signal, otherwise null
. One of the two will always be non-null.
Note that when the 'exit'
event is triggered, child process stdio streams
might still be open.
Also, note that Node.js establishes signal handlers for SIGINT
and
SIGTERM
and Node.js processes will not terminate immediately due to receipt
of those signals. Rather, Node.js will perform a sequence of cleanup actions
and then will re-raise the handled signal.
See waitpid(2)
.
Event: 'message'#
- message <Object> a parsed JSON object or primitive value.
- sendHandle <Handle> a net.Socket or net.Server object, or undefined.
The 'message'
event is triggered when a child process uses process.send()
to send messages.
child.connected#
- <Boolean> Set to false after .disconnect() is called
The child.connected
property indicates whether it is still possible to send
and receive messages from a child process. When child.connected
is false, it
is no longer possible to send or receive messages.
child.disconnect()#
Closes the IPC channel between parent and child, allowing the child to exit
gracefully once there are no other connections keeping it alive. After calling
this method the child.connected
and process.connected
properties in both
the parent and child (respectively) will be set to false, and it will no longer be possible to pass messages between the processes.
The 'disconnect'
event will be emitted when there are no messages in the
process of being received. This will most often be triggered immediately after
calling child.disconnect()
.
Note that when the child process is a Node.js instance (e.g. spawned using
child_process.fork()
), the process.disconnect()
method can be invoked
within the child process to close the IPC channel as well.
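A sketch, again assuming a hypothetical worker.js started with child_process.fork():
const fork = require('child_process').fork;
const worker = fork(`${__dirname}/worker.js`);  // hypothetical child module
worker.on('disconnect', () => {
  console.log('IPC channel closed; worker.connected is', worker.connected);
});
worker.send({ task: 'shutdown' });
worker.disconnect();  // no messages can be sent or received after this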
child.kill([signal])#
- signal <String>
The child.kill() method sends a signal to the child process. If no argument is given, the process will be sent the 'SIGTERM' signal. See signal(7) for a list of available signals.
const spawn = require('child_process').spawn;
const grep = spawn('grep', ['ssh']);
grep.on('close', (code, signal) => {
console.log(
`child process terminated due to receipt of signal ${signal}`);
});
// Send SIGHUP to process
grep.kill('SIGHUP');
The ChildProcess
object may emit an 'error'
event if the signal cannot be
delivered. Sending a signal to a child process that has already exited is not
an error but may have unforeseen consequences. Specifically, if the process
identifier (PID) has been reassigned to another process, the signal will be
delivered to that process instead which can have unexpected results.
Note that while the function is called kill
, the signal delivered to the
child process may not actually terminate the process.
See kill(2)
for reference.
Also note: on Linux, child processes of child processes will not be terminated
when attempting to kill their parent. This is likely to happen when running a
new process in a shell or with use of the shell
option of ChildProcess
, such
as in this example:
'use strict';
const spawn = require('child_process').spawn;
let child = spawn('sh', ['-c',
`node -e "setInterval(() => {
console.log(process.pid + ' is alive')
}, 500);"`
], {
stdio: ['inherit', 'inherit', 'inherit']
});
setTimeout(() => {
child.kill(); // does not terminate the node process in the shell
}, 2000);
child.pid#
- <Number> Integer
Returns the process identifier (PID) of the child process.
Example:
const spawn = require('child_process').spawn;
const grep = spawn('grep', ['ssh']);
console.log(`Spawned child pid: ${grep.pid}`);
grep.stdin.end();
child.send(message[, sendHandle[, options]][, callback])#
- message <Object>
- sendHandle <Handle>
- options <Object>
- callback <Function>
- Return: <Boolean>
When an IPC channel has been established between the parent and child (i.e. when using child_process.fork()), the child.send() method can be used to send messages to the child process. When the child process is a Node.js instance, these messages can be received via the process.on('message') event.
For example, in the parent script:
const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);
n.on('message', (m) => {
console.log('PARENT got message:', m);
});
n.send({ hello: 'world' });
And then the child script, 'sub.js'
might look like this:
process.on('message', (m) => {
console.log('CHILD got message:', m);
});
process.send({ foo: 'bar' });
Child Node.js processes will have a process.send()
method of their own that
allows the child to send messages back to the parent.
There is a special case when sending a {cmd: 'NODE_foo'} message. All messages containing a NODE_ prefix in their cmd property are considered to be reserved for use within Node.js core and will not be emitted in the child's process.on('message') event. Rather, such messages are emitted using the process.on('internalMessage') event and are consumed internally by Node.js. Applications should avoid using such messages or listening for 'internalMessage' events as they are subject to change without notice.
The optional sendHandle
argument that may be passed to child.send()
is for
passing a TCP server or socket object to the child process. The child will
receive the object as the second argument passed to the callback function
registered on the process.on('message')
event.
The options
argument, if present, is an object used to parameterize the
sending of certain types of handles. options
supports the following
properties:
- keepOpen - A Boolean value that can be used when passing instances of net.Socket. When true, the socket is kept open in the sending process. Defaults to false.
The optional callback
is a function that is invoked after the message is
sent but before the child may have received it. The function is called with a
single argument: null
on success, or an Error
object on failure.
If no callback
function is provided and the message cannot be sent, an
'error'
event will be emitted by the ChildProcess
object. This can happen,
for instance, when the child process has already exited.
child.send()
will return false
if the channel has closed or when the
backlog of unsent messages exceeds a threshold that makes it unwise to send
more. Otherwise, the method returns true
. The callback
function can be
used to implement flow control.
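A sketch of using the return value and callback for pacing (sub.js is the hypothetical child module from the example above):
const cp = require('child_process');
const child = cp.fork(`${__dirname}/sub.js`);
let seq = 0;
function sendNext() {
  if (seq >= 1000) return;
  // The callback fires once the message has been handed off (or fails);
  // waiting for it keeps the backlog of unsent messages bounded.
  const ok = child.send({ seq: seq++ }, (err) => {
    if (err) {
      console.error('send failed:', err);
      return;
    }
    sendNext();
  });
  if (!ok) {
    console.log('channel closed or backed up');
  }
}
sendNext();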
Example: sending a server object#
The sendHandle
argument can be used, for instance, to pass the handle of
a TCP server object to the child process as illustrated in the example below:
const child = require('child_process').fork('child.js');
// Open up the server object and send the handle.
const server = require('net').createServer();
server.on('connection', (socket) => {
socket.end('handled by parent');
});
server.listen(1337, () => {
child.send('server', server);
});
The child would then receive the server object as:
process.on('message', (m, server) => {
if (m === 'server') {
server.on('connection', (socket) => {
socket.end('handled by child');
});
}
});
Now that the server is shared between the parent and child, some connections can be handled by the parent and some by the child.
While the example above uses a server created using the net module, dgram module servers use exactly the same workflow with the exception of listening on a 'message' event instead of 'connection' and using server.bind instead of server.listen. This is, however, currently only supported on UNIX platforms.
Example: sending a socket object#
Similarly, the sendHandle argument can be used to pass the handle of a socket to the child process. The example below spawns two children that each handle connections with "normal" or "special" priority:
const normal = require('child_process').fork('child.js', ['normal']);
const special = require('child_process').fork('child.js', ['special']);
// Open up the server and send sockets to child
const server = require('net').createServer();
server.on('connection', (socket) => {
// If this is special priority
if (socket.remoteAddress === '74.125.127.100') {
special.send('socket', socket);
return;
}
// This is normal priority
normal.send('socket', socket);
});
server.listen(1337);
The child.js
would receive the socket handle as the second argument passed
to the event callback function:
process.on('message', (m, socket) => {
if (m === 'socket') {
socket.end(`Request handled with ${process.argv[2]} priority`);
}
});
Once a socket has been passed to a child, the parent is no longer capable of
tracking when the socket is destroyed. To indicate this, the .connections
property becomes null
. It is recommended not to use .maxConnections
when
this occurs.
Note: this function uses JSON.stringify() internally to serialize the message.
child.stderr#
A Readable Stream that represents the child process's stderr.
If the child was spawned with stdio[2] set to anything other than 'pipe', then this will be undefined.
child.stderr is an alias for child.stdio[2]. Both properties will refer to the same value.