Node.js

// server.js:

var http = require('http');
var server = http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.write('Hello World\n');
  response.end();
});
server.listen(80);

console.log('Server running on port 80');

We have to explicitly invoke response.end() to end the connection because Node 
allows us to keep a connection open and pass data back and forth.

The above code requires the http module.  The createServer method takes a 
function with two parameters, the request object and the response object, 
which createServer passes in automatically.  Inside this function, we use the 
writeHead method to send the response status code and the response headers.  
Besides the Content-Type header, we can include other response headers as well.
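
For instance, besides Content-Type, a response could set caching and custom headers in the same call (the header values here are just illustrative):

response.writeHead(200, {
  'Content-Type': 'text/plain',
  'Cache-Control': 'no-cache',  // a standard caching header
  'X-Powered-By': 'Node.js'     // custom headers are allowed as well
});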

This basic server is very simple.  The rest is up to our creativity, which is limitless.

To start the node server:

node server.js

When we run the above command, our program does not terminate, because a 
Node program will always run until it is certain that no further events are 
possible.  In this case, the open HTTP server is the source of events that will 
keep things going.

Another bare-bones, slightly more complicated server:

var http = require("http");
var url = require("url");

http.createServer(function(request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  var params = url.parse(request.url, true).query;
  var input = params.number;

  var numInput = Number(input);                          // avoid `new Number(...)` wrapper objects
  var numOutput = (Math.random() * numInput).toFixed(0); // a random integer, as a string

  response.write(numOutput);
  response.end();
}).listen(80);
console.log("The server is now running on port 80");

The above code makes use of the url module to parse the URL.  The url module 
provides methods that allow us to extract different parts of a URL.  The 
querystring module can in turn be used to parse the query string for request 
parameters.

We can also use querystring to parse the body of a POST request for parameters.
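
A quick sketch of both modules in action (the URL and field names are just examples):

var url = require('url');
var querystring = require('querystring');

// Suppose the request URL is /upload?number=42
var parsed = url.parse('/upload?number=42', true);
console.log(parsed.pathname);      // '/upload'
console.log(parsed.query.number);  // '42' (parsed for us because of the `true` flag)

// Parsing a raw query string (or a POST body in the same format):
console.log(querystring.parse('number=42&text=hello').text); // 'hello'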

Now, let's write our router.js:

function route(pathname) {
}
exports.route = route;

Right now, our router file does not do anything yet, but let's wire it up to our 
server.js file:

var http = require('http');
var url = require('url');
var router = require('./router');

var server = http.createServer(function (request, response) {
  var pathname = url.parse(request.url).pathname;
  router.route(pathname);
});
server.listen(80);

How we implement our router is up to us.  Perhaps we can implement a method 
within our router object so that other modules can invoke it to register 
additional routes, and our own application would call this method to register 
its own routes.  What is left inside the route function is the actual logic: 
mapping the pathname against the registry to determine the handler function for 
the request, then invoking that handler with the appropriate parameters (the 
request object, the response object, etc.), as sketched below.
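
Here is a minimal sketch of such a registry-based router; the registry object and the register function are illustrative names, not an established API:

// router.js -- a sketch of a registry-based router
var registry = {};

function register(pathname, handler) {
  registry[pathname] = handler;
}

function route(pathname, request, response) {
  var handler = registry[pathname];
  if (typeof handler === 'function') {
    handler(request, response);
  } else {
    response.writeHead(404, {'Content-Type': 'text/plain'});
    response.end('404 Not found');
  }
}

exports.register = register;
exports.route = route;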

With Node, we should think of everything as asynchronous, event-driven, and 
non-blocking.  If any of our operations blocks, it blocks the entire server and 
prevents us from serving simultaneous requests, so we need to replace blocking 
operations with non-blocking alternatives.  This also means we cannot expect 
the request handlers to return content synchronously.  Instead, we pass the 
request object and the response object to the request handler functions so that 
each handler can at least use the response object to send the result when it 
becomes available.  Alternatively, if we do not want to pass the request and 
response objects to the handlers, we can implement a synchronous-looking flow 
by mandating that all request handler functions return a promise.
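
As a sketch of that alternative (assuming a Node version or polyfill that provides a global Promise; the handler and file names are hypothetical):

var fs = require('fs');

// A handler that returns a promise instead of writing to the response itself.
function home(request) {
  return new Promise(function (resolve, reject) {
    fs.readFile('./home.html', 'utf8', function (err, content) {
      if (err) { return reject(err); }
      resolve(content);
    });
  });
}

// The server resolves the promise and finishes the response itself:
home(request).then(function (content) {
  response.writeHead(200, {'Content-Type': 'text/html'});
  response.end(content);
}, function (err) {
  response.writeHead(500, {'Content-Type': 'text/plain'});
  response.end(String(err));
});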

To handle POST request, we must add 2 listener functions to the request object:

var postData = "";
request.addListener("data", function(chunk) {
  postData += chunk;
});
request.addListener("end", function() {
  // This callback is invoked when all chunks of data have been received.
});

The reason we have to process POST data in chunks is that a POST request can 
be very large (nothing stops the user from entering text that is multiple 
megabytes in size, and it may be a legitimate file upload).  Handling the whole 
bulk of data in one go would be a blocking operation.  To keep this process 
non-blocking, Node provides the data to our code in chunks as they become 
available from the network.

The above code may be common to multiple request handlers, so it should be 
implemented in one common place, and the aggregated result should be passed on 
to the request handler functions.

Before getting the POST data, we should also include:

request.setEncoding('utf8');

If it is a POST request, then when we have received all the POST data in the 
above "end" handler, we can invoke the request handler function, passing it the 
request object, the response object, and the POST data.

If it is not a POST request, our router function can invoke the request handler 
function, passing it the request object, the response object, and an empty string.

To parse the post data into individual fields, we can use the querystring module:

var querystring = require('querystring');
querystring.parse(postData).text;

The above code assumes that we have a field named text.

To handle file upload, we need to use the formidable module:

var formidable = require('formidable');
var http = require('http');
var util = require('util'); // the old 'sys' module was renamed to 'util'

var form = new formidable.IncomingForm();
form.parse(request, function(err, fields, files) {
  response.writeHead(200, {'content-type': 'text/plain'});
  response.write('Received upload:\n\n');
  response.end(util.inspect({fields: fields, files: files}));
});

In the above code, notice that the first parameter we provide to form.parse is 
the request object, and the second parameter is a callback that takes 3 
parameters (err, fields, and files).  form.parse invokes this callback with 
those 3 arguments.  Inside it, we use util.inspect to inspect the list of 
fields and the list of files.

Our function for displaying a file:

var fs = require('fs'); // needed for fs.readFile

function show(response) {
  fs.readFile('/tmp/test.png', 'binary', function(error, file) {
    if (error) {
      response.writeHead(500, {'Content-Type': 'text/plain'});
      response.write(error + "\n");
      response.end();
    } else {
      response.writeHead(200, {'Content-Type': 'image/png'});
      response.write(file, 'binary');
      response.end();
    }
  });
}

In the above code, the third parameter that we provide to fs.readFile is a 
callback function.  This function takes two parameters: the first is the error 
object, and the second represents the content of the file.  If there is no 
error, the else block is executed, and we use response.write to send the 
content of the file to the browser.

We need to use multipart/form-data in our HTML:

<form action="/upload" enctype="multipart/form-data" method="post">
  <input type="file" name="upload">
  <input type="submit" value="Upload"/>
</form>

In order to use the formidable module, we need to remove the "post data" code 
that we implemented earlier.  We won't need it for handling the file upload, 
and it even raises a problem: if we have already consumed the "data" events, 
formidable cannot receive them and will not work.  We must also remove the 
request.setEncoding line.

The formidable module will handle the details of saving the uploaded file to the 
local /tmp directory.  Our function for handling the file upload:

var formidable = require('formidable');
var fs = require('fs');

function upload(response, request) {
  var form = new formidable.IncomingForm();
  form.parse(request, function(error, fields, files) {
    fs.rename(files.upload.path, '/tmp/test.png', function(err) {
      if (err) {
        // Windows cannot rename onto an existing file, so remove the old
        // file first and retry the rename.
        fs.unlinkSync('/tmp/test.png');
        fs.renameSync(files.upload.path, '/tmp/test.png');
      }
    });
    response.writeHead(200, {'Content-Type': 'text/html'});
    response.write("File uploaded");
    response.end();
  });
}

In the above code, we assume that the form only contains one file-upload field.  
We also assume that we have logic in the front-end that only allows PNG files 
to be uploaded.  To keep things simple, we rename the file to /tmp/test.png, 
but this may be a problem if multiple people upload files at the same time.  
Inside the if statement, we delete the /tmp/test.png file and perform the 
rename operation again, because the Windows implementation of Node does not 
like it when we try to rename a file onto the position of an existing file; 
this is why we need to delete the file in case of an error.

Since Node is all about non-blocking I/O, functions generally return their 
results using callbacks.  The convention used by the Node core is to reserve 
the first parameter of any callback for an optional error object.  In other 
words, if the first parameter of a callback is truthy, an error happened.  If 
it is an Error object, we can get the error message from it.  If it is a 
string, it might be the error message itself.  It can also be an error code.  
Check the respective API, module, or function for its own documentation.

To start the interactive Node shell:

node

To quit the interactive Node shell, press CTRL+C twice (or CTRL+D once).

The interactive Node shell (also called the REPL) is a good place to test 
simple one-liners.  It comes with other great features and supports 
tab-completion.

Suppose that we have a file named hello.js that contains some Node code.  
We can run this code:

node hello.js

What is Node.js?

Node.js is a server-side JavaScript runtime based on Google's V8 engine. Node is fast because it is event-driven and uses non-blocking IO.

What are the 3 most important things to remember when using Node?

  • Node is an event-driven, callback-based, asynchronous, non-blocking framework.
  • Node works well for IO-intensive applications but not particularly well for CPU-intensive applications. Node is a great option for applications that wait on I/O and have to handle a lot of concurrent connections.
  • Error handling is paramount. A single unhandled error can crash the entire application.

What companies are using Node?

Companies such as LinkedIn, PayPal, Walmart, and Netflix have used Node in production. The fact that these companies are using Node may indicate that Node is stable, but these companies may or may not use Node for everything. So, be practical, know all the aspects of the beast, and use the right tool for the job.

What is the negative side of using Node.js?

Node.js is fast because it uses non-blocking IO with callbacks. However, we may get into a situation known as "callback hell", where we need many nested callbacks. For such situations, we can use promises, event emitters, or a control-flow library such as async (see "How can we avoid the callback hell?" below).

Node is ideal for I/O bound applications (or those that wait on user events), but not so great for CPU-heavy applications. Good examples include data-intensive realtime applications (DIRT), single page applications, JSON APIs, and data-streaming applications. See http://www.airpair.com/javascript/node-js-tutorial

When should we use Node and when should we not use Node?

Each Node.js process runs in a single thread, and by default it has a memory limit of 512MB on 32-bit systems and 1GB on 64-bit systems. Although the memory limit can be bumped to ~1GB on 32-bit systems and ~1.7GB on 64-bit systems, both memory and processing power can still become bottlenecks for various processes. See Clustering; a minimal sketch follows.
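
A minimal sketch using the core cluster module, which forks one worker per CPU so the application is not limited to a single thread:

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // The master process only forks workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Each worker runs its own server; connections are distributed among them.
  http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Handled by worker ' + process.pid + '\n');
  }).listen(80);
}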

How can we tell if node is installed?

Open a terminal and:

node [press enter]
1 + 1 [press enter]

How can we launch the node shell?

Open a terminal and:

node [press enter]

How can we exit the node shell?

Press CTRL+C twice.

How can we use the Node REPL (read–eval–print loop)?

node
> console.log('Node is running');
Node is running
> .help
.break Sometimes you get stuck, this gets you out
.clear Alias for .break
.exit  Exit the repl
.help  Show repl options
.load  Load JS from a file into the REPL session
.save  Save all evaluated commands in this REPL session to a file
> .exit

What is NVM?

Node Version Manager. It is a tool that allows us to install, and switch between, multiple versions of Node.
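
A few typical commands (the version number is just an example):

nvm install 0.12   # install a specific version of Node
nvm use 0.12       # switch the current shell to that version
nvm ls             # list the installed versions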

What is NPM?

Node Package Manager. It installs and manages the packages (modules) our projects depend on. This is different from NVM.

What is the key difference between NPM and RubyGems?

One of the differentiating factors between NPM and RubyGems is that NPM installs all packages into the local node_modules folder by default, which means you don't have to worry about handling multiple versions of the same package or accidental cross-project pollution. The node_modules folder is a core Node concept that allows programs to load installed packages, and I encourage you to read and understand the manual on this subject.

Why does Node use CommonJS?

The nice thing about requiring Node modules is that they aren't automatically injected into the global scope; instead, you just assign them to a variable of your choice. That means you don't have to care about two or more modules that have functions with the same name (this require mechanism is part of the CommonJS specification).

How can we create a module?

To create a module, we create a file named something.js:

function doSomething()  {
}

exports.doSomething = doSomething;

Node follows the CommonJS specification. Functions inside this file are not global functions. To use our module, we do:

var something = require("./something"); // load something.js
something.doSomething();

When creating your own modules, all you have to do is take care when exporting something (whether it's a function, an object, a number, and so on). The first approach is to export a single object:

var person = { name: 'John', age: 20 }; 
module.exports = person;

The second approach requires adding properties to the exports object:

exports.name = 'John';
exports.age = 20;
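
Either way, consumers see the same shape. A quick usage sketch (the file names are illustrative):

// Assuming the single-object approach lives in person.js:
var person = require('./person');
console.log(person.name); // 'John'

// Assuming the exports-properties approach lives in person2.js:
var person2 = require('./person2');
console.log(person2.age); // 20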

Do modules share scope?

No. A thing to note about modules is that they don't share scope, so if you want to share a variable between different modules, you must put it into a separate module that is then required by the other modules. Another interesting thing to remember is that modules are only loaded once; after that they are cached by Node.

How can we share a variable between modules?

If you want to share a variable between different modules, you must put it into a separate module that is then required by the other modules.
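
A minimal sketch (the file and function names are illustrative):

// counter.js -- the shared module
var count = 0;
exports.increment = function () { return ++count; };

// a.js
var counter = require('./counter');
console.log(counter.increment()); // 1

// b.js (loaded later in the same process)
var counter = require('./counter');
console.log(counter.increment()); // 2 -- the same cached instance as in a.js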

How many times will Node load a module?

Only once; after that the module is cached by Node. See https://www.airpair.com/javascript/node-js-tutorial
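
We can verify this by requiring the same module twice:

var a = require('./something');
var b = require('./something');
console.log(a === b); // true -- the second require() returns the cached module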

What are the global objects in Node?

Unlike the browsers, Node does not have a window global object, but instead has two others: global and process. However, you should seriously avoid adding properties to either of them. See http://www.airpair.com/javascript/node-js-tutorial

How can we implement server, router, and request handlers?

server.js:

var http = require("http");
var url = require("url");

function start(route, map) {
    function onRequest(request, response) {
        var pathname = url.parse(request.url).pathname;
        route(map, pathname, request, response);
    }
    http.createServer(onRequest).listen(80);
    console.log("Server has started successfully.");
}

exports.start = start;

requestHandlers.js:

var exec = require("child_process").exec;

function home(request, response) {
    exec("ls -lah", function (error, stdout, stderr) {
        response.writeHead(200, {"Content-Type": "text/plain"});
        response.write(stdout);
        response.end();
    });
}
function upload(request, response) {
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.write("Hello Upload");
    response.end();
}
exports.home = home;
exports.upload = upload;

router.js:

function route(map, pathname, request, response) {
    if (typeof map[pathname] === 'function') {
        map[pathname](request, response);
    } else {
        console.log("No request handler found for " + pathname);
        response.writeHead(404, {"Content-Type": "text/plain"});
        response.write("404 Not found");
        response.end();
    }
}

exports.route = route;

index.js:

var server = require("./server");
var router = require("./router");
var requestHandlers = require("./requestHandlers");

var map = {};
map["/"] = requestHandler.home
map["/home"] = requestHandler.home;
map["/upload"] = requestHandler.upload;

server.start(router.route, map);

As you can see, in index.js we define the map object, which maps each path to the appropriate handler. router.route is a function, and we pass this function to server.start along with the map object. Inside the server.start method, we invoke the route function and pass it the map object and the pathname of the request.

How can we obtain the POST content?

Node.js gives us data (as it arrives on the network stack) in small chunks using callbacks that are called upon certain events. These events are "data" (a new chunk of POST data arrives) and "end" (all chunks have been received). We need to tell Node.js which functions to call when these events occur. This is done by adding listeners to the request object that is passed to our onRequest callback whenever an HTTP request is received:

request.addListener("data", function (chunk) {
});
request.addListener("end", function () {
});

It is the HTTP server's job to give the application all the data it needs from the request. Therefore, we should add this in server.js:

var http = require("http");
var url = require("url");

function start(route, map) {
    function onRequest(request, response) {
        var postData = "";
        var pathname = url.parse(request.url).pathname;

        request.setEncoding("utf8");
        request.addListener("data", function(postDataChunk) {
            postData += postDataChunk;
        });
        request.addListener("end", function() {
            route(map, pathname, request, response, postData);
        });
    }
    http.createServer(onRequest).listen(80);
    console.log("Server has started.");
}
exports.start = start;

First, we declare that we expect the received data to be UTF-8 encoded. We add an event handler for the "data" event which step by step fills our postData variable whenever a new chunk of POST data arrives, and we move the call to our router into the "end" event callback to make sure it is only called when all the POST data has been gathered. We also pass the POST data to the router because our handlers need it.

How can we parse post data?

The querystring module can handle POST data. Here is how we can access a field named text:

var querystring = require('querystring');

querystring.parse(postData).text;

What is EventEmitter?

The EventEmitter pattern allows implementors to emit an event to which the consumers can subscribe if they are interested. This pattern may be familiar to you from the browser, where it is used for attaching DOM event handlers. Node has an EventEmitter class in core which we can use to make our own EventEmitter objects. Let's create a MemoryWatcher class that inherits from EventEmitter and emits two types of events:

var EventEmitter = require('events').EventEmitter; 
var util = require('util'); 

function MemoryWatcher(opts) { 
    if (!(this instanceof MemoryWatcher)) { 
        return new MemoryWatcher(opts); 
    } 

    opts = opts || {}; 
    opts.frequency = opts.frequency || 30000; // default: 30 seconds 

    EventEmitter.call(this); 

    var that = this; 

    setInterval(function() { 
        var bytes = process.memoryUsage().rss; 
        if (opts.maxBytes && bytes > opts.maxBytes) { 
            that.emit('error', new Error('Memory exceeded ' + opts.maxBytes + ' bytes')); 
        } else { 
            that.emit('data', bytes); 
        } 
    }, opts.frequency); 
} 

util.inherits(MemoryWatcher, EventEmitter); 

Using it is very simple: 


var mem = new MemoryWatcher({ 
    maxBytes: 12455936, 
    frequency: 5000 
}); 

mem.on('data', function(bytes) { 
    console.log(bytes); 
}); 

mem.on('error', function(err) { 
    throw err; 
});

See https://www.airpair.com/javascript/node-js-tutorial

An easier way to create EventEmitter objects is to instantiate the raw EventEmitter class directly:

var EventEmitter = require('events').EventEmitter; 
var emitter = new EventEmitter(); 

emitter.on('data', function(bytes) { 
    console.log(bytes); 
}); 

setInterval(function() { 
    emitter.emit('data', process.memoryUsage().rss); 
}, 30000);

See https://www.airpair.com/javascript/node-js-tutorial

What is Stream?

Streams represent an abstract interface for asynchronously manipulating a continuous flow of data. They are similar to Unix pipes and can be classified into five types: readable, writable, transform, duplex and "classic".

As with Unix pipes, Node streams implement a composition operator called .pipe(). The main benefits of using streams are that you don't have to buffer the whole data into memory and they're easily composable.

To get a better understanding of how streams work, we will create an application that reads a file, encrypts it using the AES-256 algorithm, and then compresses it using gzip. All of this uses streams, which means that each chunk read is encrypted and compressed as it flows through the pipeline.

var crypto = require('crypto'); 
var fs = require('fs'); 
var zlib = require('zlib'); 

var password = new Buffer(process.env.PASS || 'password'); 
var encryptStream = crypto.createCipher('aes-256-cbc', password); 

var gzip = zlib.createGzip(); 
var readStream = fs.createReadStream(__filename); // the current file 
var writeStream = fs.createWriteStream(__dirname + '/out.gz'); 

readStream // reads current file 
    .pipe(encryptStream) // encrypts 
    .pipe(gzip) // compresses 
    .pipe(writeStream) // writes to out file 
    .on('finish', function () { // all done 
        console.log('done'); 
    });

See https://www.airpair.com/javascript/node-js-tutorial

Here we take a readable stream, pipe it into an encryption stream, then pipe that into a gzip compression stream and finally pipe it into a write stream (writing the content to disk). The encryption and compression streams are transform streams, which represent duplex streams where the output is in some way computed from the input.

After running that example we should see a file called out.gz. Now it's time to implement the reverse, which is decrypting the file and outputting the content to the terminal:

var crypto = require('crypto'); 
var fs = require('fs'); 
var zlib = require('zlib'); 

var password = new Buffer(process.env.PASS || 'password'); 
var decryptStream = crypto.createDecipher('aes-256-cbc', password); 
var gzip = zlib.createGunzip(); 

var readStream = fs.createReadStream(__dirname + '/out.gz'); 

readStream // reads the encrypted out.gz file 
    .pipe(gzip) // uncompresses 
    .pipe(decryptStream) // decrypts 
    .pipe(process.stdout) // writes to terminal 
    .on('finish', function () { // finished 
        console.log('done'); 
    });

See https://www.airpair.com/javascript/node-js-tutorial

How can we read a file and display it using stream?

var fs = require('fs');
fs.createReadStream('./data/customers.csv').pipe(process.stdout);

What is the importance of error handling in Node?

Error handling is one of the most important topics in Node. If you ignore errors or deal with them improperly, your entire application might crash or be left in an inconsistent state. See https://www.airpair.com/javascript/node-js-tutorial

What is Error-first callbacks?

The "error-first" callback is a standard protocol for Node callbacks. It originated in Node core, but it has spread into userland as well to become today's standard. This is a very simple convention, with basically one rule: the first argument for the callback function should be the error object.

That means that there are two possible scenarios:

  • If the error argument is null, then the operation was successful.
  • If the error argument is set, then an error occurred and you need to handle it.

Let's take a look at how we read a file's content with Node:

fs.readFile('/foo.txt', function(err, data) { 
    // ... 
});

The callback for fs.readFile has two arguments: the error and the file content. Now let's implement a similar function that reads the content of multiple files, passed as an array argument. The signature for the function should look similar, but instead of passing a single file path we will pass in an array this time:

readFiles(filesArray, callback);

We will respect the error-first pattern and won't handle the error in the readFiles function, but will delegate that responsibility to the callback. The readFiles function will loop over the file paths and read the content of each. If it encounters an error, it will invoke the callback only once. After it has finished reading the content of the last file in the array, it will invoke the callback with null as the first argument.

var fs = require('fs'); 

function readFiles(files, callback) { 
    var filesLeft = files.length; 
    var contents = {}; 
    var error = null; 

    var processContent = function(filePath) { 
        return function(err, data) { 
            // an error was previously encountered and the callback was invoked 
            if (error !== null) { return; } 

            // an error happened while trying to read the file, so invoke the callback 
            if (err) { 
                error = err; 
                return callback(err); 
            } 

            contents[filePath] = data; 

            // after the last file read was executed, invoke the callback 
            if (!--filesLeft) { 
                callback(null, contents); 
            } 
        }; 
    }; 

    files.forEach(function(filePath) { 
        fs.readFile(filePath, processContent(filePath)); 
    }); 
}

See https://www.airpair.com/javascript/node-js-tutorial

How can we handle EventEmitter errors?

We have to be careful when dealing with event emitters (and that includes streams), because if there's an unhandled error event it will crash our application. Here is the simplest example of such an event, triggered by ourselves:

var EventEmitter = require('events').EventEmitter; 
var emitter = new EventEmitter(); 

emitter.emit('error', new Error('something bad happened'));

Depending on your application this might be a fatal (unrecoverable) error or an error that should not crash your application (like failing to send an email, for example). Either way, you should attach an error event handler:

emitter.on('error', function(err) { 
    console.error('something went wrong with the ee:' + err.message); 
});

What is the purpose of verror?

There are a lot of situations where we will want to delegate the error to the callback rather than deal with it ourselves. In fact, that's exactly what we did with the readFiles function we created earlier: in case there's an error reading a file, we just delegate it to the callback.

Let's try to call the function with a non-existent file and see what happens:

readFiles(['non-existing-file'], function(err, contents) { 
    if (err) { throw err; } 
    console.log(contents); 
});

The output should be something like the following:

node readFiles.js 

/Users/alexandruvladutu/www/airpair-article/examples/readFiles.js:34 
    if (err) { throw err; } 
Error: ENOENT, open '/Users/alexandruvladutu/www/airpair-article/examples/non-existing-file'

That's not super helpful, especially because in real-world situations there will probably be a function that calls another function that calls the original function. For example, you might have another function called readMarkdownFiles that only reads markdown files using the readFiles function. Also, the output above doesn't even provide a useful stack trace, so you would have to dig deeper to find out where exactly the error came from. Luckily we can do something about that by integrating the verror module into our application. With verror, we can wrap our errors to provide more descriptive messages. We will have to require the module at the beginning of the file and then wrap the error when invoking the callback:

var VError = require('verror'); 

function readFiles(files, callback) { 
    ... 
    return callback(new VError(err, 'failed to read file %s', filePath)); 
    ... 
}

And there it is! Instead of having to search Google for 'ENOENT' and digging through the code, we now know that there was a problem reading the file and that it came from the readFiles function.

This is a simple example but it shows the power of verror. In production this module will be a lot more useful because the codebase will probably be large and the error will be propagated through more functions than in our basic example.

How can we debug Node application using the debugger keyword?

Put debugger statements in your code and use:

node debug program.js

See http://nodejs.org/api/debugger.html

How can we debug Node application using node-inspector?

It has a lot of goodies baked in, but the most important are:

  • It has the ability to setup breakpoints.
  • We can step over, step in, step out, resume (continue).
  • We can inspect scopes, variables, object properties.
  • Besides inspecting, we can also edit variables and object properties.
  • It's based on the Blink Developer Tools, so it should look and feel familiar to frontend developers.

node-inspector is installable via npm:

npm install -g node-inspector

Let's say we have the following basic Node example:

var http = require('http'); // needed for http.createServer

var port = process.env.PORT || 1337; 

http.createServer(function(req, res) { 
    res.writeHead(200, { 'Content-Type': 'text/html' }); 
    res.end(new Date() + '\n'); 
}).listen(port); 

console.log('Server running on port %s', port);

To run our example with node-inspector we just need to type in the following command:

node-debug example.js

That should start our application and open the node-inspector interface in Chrome. Let's setup a breakpoint in the request handlers (by clicking on the line number for the one containing res.writeHead). Now open another tab and visit http://localhost:1337. The browser should be in a loading stage, but switch to the node-inspector interface.

At this point the Chrome address bar shows http://localhost:8080/debug?port=5858. See https://www.airpair.com/javascript/node-js-tutorial

If you open the console you can inspect the request and response objects, modify them, and so on. This is just a basic example to get you started with node-inspector, but in real-world applications you will still benefit from these debugging techniques to track down more complicated issues.

How can we execute a program?

node program.js

How can we execute a quick JavaScript statement from command-line?

node -e "console.log(new Date());"

How can we access process information?

Developers can access useful process information in code through the process object:

console.log(process.pid);

How can we access the global object?

For browsers, JavaScript by default puts everything into the global scope. This was coined as one of the bad parts of JavaScript in Douglas Crockford's famous JavaScript: The Good Parts. Node.js was designed to behave differently, with everything being local by default. In case we need to access globals, there is a global object. Likewise, when we need to export something, we should do so explicitly.

In a sense, the window object from front-end/browser JavaScript metamorphosed into a combination of the global and process objects. Needless to say, the document object that represents the DOM of the webpage is nonexistent in Node.js.

What is Buffer?

Buffer is a Node.js addition to the data types we know from front-end JavaScript: the primitives (boolean, string, number) and the all-encompassing objects (arrays and functions are also objects). We can think of buffers as extremely efficient stores for binary data. In fact, Node.js will try to use buffers any time it can.
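
A quick sketch (the new Buffer(...) constructor matches the Node versions of this era; newer versions use Buffer.from()):

var buf = new Buffer('Hello, world', 'utf8');
console.log(buf);                   // <Buffer 48 65 6c 6c 6f ...>
console.log(buf.length);            // byte length, not character count
console.log(buf.toString('utf8'));  // 'Hello, world'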

What is __dirname?

__dirname is the absolute path of the directory containing the file in which this global variable was used, while process.cwd() is the absolute path of the directory from which the process was started. The latter might not be the same as the former if we started the program from a different folder.

What is process.cwd?

__dirname is the absolute path of the directory containing the file in which this global variable was used, while process.cwd() is the absolute path of the directory from which the process was started. The latter might not be the same as the former if we started the program from a different folder.
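
For example (the paths are illustrative):

// Suppose the script lives at /home/user/project/app.js
// and we run `cd /home/user && node project/app.js`:
console.log(__dirname);      // '/home/user/project'
console.log(process.cwd());  // '/home/user'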

What is the purpose of the node_modules folder?

We need either the package.json file or the node_modules folder in order to install modules locally.

The best thing about NPM is that it keeps all the dependencies local, so if module A uses module B v1.3 and module C uses module B v2.0 (with breaking changes compared to v1.3), both A and C will have their own localized copies of the different versions of B.

How can we avoid the callback hell?

Callback code can be rewritten with the use of event emitters, promises, or by utilizing the async library.
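
As a sketch using the async library (npm install async; the file names are just examples), two nested reads become a flat series:

var async = require('async');
var fs = require('fs');

async.series([
  function (done) { fs.readFile('a.txt', 'utf8', done); },
  function (done) { fs.readFile('b.txt', 'utf8', done); }
], function (err, results) {
  if (err) { return console.error(err); }
  console.log(results); // the contents of a.txt and b.txt, in order
});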

What can we do with the package.json file?

To help you manage project dependencies, Node introduces package.json as a core concept. On the surface it works like a Gemfile in Ruby: it contains the list of modules that your project depends on. In reality, package.json is a very powerful tool that can be used to run hook scripts, publish author information, add custom settings, and so on. Because package.json is just a JSON file, any property that isn't understood by Node or NPM is ignored and can be used for your own needs. If you are in a folder with a package.json and want to install all the packages it lists, simply type:

npm install

This is equivalent to bundle install in the Ruby world.
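
A minimal package.json might look like this (all values are illustrative):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.0.0"
  },
  "scripts": {
    "start": "node server.js"
  }
}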

Can Node use web socket and server-sent events?

Yes. WebSockets are typically handled with a library such as Socket.IO or ws, while server-sent events can be implemented with nothing but the core http module; see the sketch below.
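
As a sketch, server-sent events need nothing beyond the core http module; a client would simply open an EventSource pointing at this endpoint:

var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, {
    'Content-Type': 'text/event-stream',  // the SSE media type
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });
  setInterval(function () {
    response.write('data: ' + new Date() + '\n\n'); // one SSE message
  }, 1000);
}).listen(80);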

Why is it a best practice not to include the node_modules folder in the source repository?

The node_modules folder is used to store the modules that we need, but we do not really need to version its contents, because all the information is already contained in the package.json file; the modules can easily be reinstalled with:

npm install

Having the content of the node_modules folder stored in your source code repository just makes the repository bigger. The best practice is not to include the node_modules folder in the Git repository when the project is a module that is supposed to be used in other applications. However, it is recommended to include node_modules for deployable applications, because this prevents breakage caused by an unfortunate dependency update.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License