
Function as a Service using V8 JavaScript engine's built-in multi-tenancy and isolation

One of the most underrated and least-known features of Google’s V8 JavaScript engine is its ability to create and run untrusted code in a secure, isolated sandbox called an Isolate. Given enough memory and compute, a single V8 instance can create and run hundreds of such isolates. This feature is best known from web browsers (when we open a new tab, a new V8 isolate is created for that tab, ensuring isolation and security across browser tabs), but it can also be used on the server side to run JavaScript compute instances inside thin pseudo-virtual instances powered by V8 isolates. This is useful wherever one is willing to trade away some of the security offered by Virtual Machines and containers in order to improve throughput (the number of functions executed per second). V8 isolates provide isolation between running JavaScript instances: each function invoked in an isolate has its own heap, execution stack, garbage collector and separate execution contexts, which offers a thin layer of security between multiple tenants.

V8 isolates

As stated earlier, V8 isolates are like lightweight virtual instances orchestrated by the V8 engine. Each isolate has its own heap allocator, heap memory area, execution stack (handle scope) and garbage collector, and each isolate is bound to a thread, so only one thread can access the isolate at a time. These ingredients provide everything needed to execute JavaScript functions completely isolated from each other. It is also possible to have multiple object groups within the same isolate, called Contexts. Here is the definition of isolates as per Cloudflare:

V8 orchestrates isolates: lightweight contexts that group variables with the code allowed to mutate them. You could even consider an isolate a “sandbox” for your function to run in.

A single runtime can run hundreds or thousands of isolates, seamlessly switching between them. Each isolate’s memory is completely isolated, so each piece of code is protected from other untrusted or user-written code on the runtime. Isolates are also designed to start very quickly. Instead of creating a virtual machine for each function, an isolate is created within an existing environment. This model eliminates the cold starts of the virtual machine model.

Why are isolates faster than VMs, containers and processes for FaaS?

In the FaaS model, each function is usually invoked in a separate “isolated executor” to provide security. Here the “isolated executor” can be a virtual machine, a container or a process.

  1. VMs offer the highest degree of isolation - a VM runs its own operating system, and all interactions with the hardware and the host operating system are controlled via a secure hypervisor. Despite providing strong security, VMs are slow to spin up, and we cannot pack many VMs onto the same hardware because of the memory required to run an entire compute stack from the ground up.
  2. Containers offer process-group-level isolation on top of the same operating system. Each container has its own root file system and virtual network IP, and containers on the same host can connect to each other via a virtual network overlay. Containers can also be assigned dedicated CPU and memory quotas and fine-grained access to other hardware resources using cgroups. Compared to VMs, containers are faster to create and destroy but offer less security (since all containers share the same operating system, malicious code that escapes a container can take control of the host operating system and affect other containers). While containers are a good solution for FaaS and have worked well in many cases, they are still an overhead for high-velocity FaaS workloads because creating containers requires tightly coupled coordination with the operating system.
  3. A process is the basic unit of work as seen by the operating system. Each process has its own virtual address space, file descriptors and threads. While isolation still exists, it is not as strong as with containers, because processes share the same host file system and have unbounded access to host hardware resources. Compared to containers, processes are much faster to create and destroy, but beyond a certain limit, forking new processes and the context switches involved in scheduling them become a bottleneck.
  4. Isolates: as mentioned previously, isolates are virtual units of execution inside the same process, each bound to a specific thread of execution. Isolates are built on top of raw threads that share the same process-level heap memory; the isolation is created virtually, using the high-level constructs needed to execute JavaScript code instances separately. Unlike processes, isolates are created and managed in user space by the V8 instance, which itself runs as a process. This greatly reduces forking overhead, and context-switching overhead is also lower because switching between threads is cheaper than switching between processes.

To conclude, scalability is inversely proportional to the level of isolation required. The level of isolation required depends on the type of workloads executed by the FaaS platform.

When are isolates useful?

From the previous section it is clear that we need to make a trade-off between isolation and performance, so isolates are useful where performance matters and the workloads don’t demand much isolation. Isolates can be used when:

  1. Functions are very short-lived, event-driven, and real-time response is important (isolates have very low cold-start times).
  2. A large number of functions needs to be served with minimal memory (isolates consume far less memory).
  3. Functions don’t make system calls and just manipulate data - in other words, they take an input and return an output without depending on I/O or the network. (If you care about security, this may be an important requirement, because isolates don’t offer any protection at the operating-system level; this can, however, be mitigated with measures at the platform level.)

It is also possible to use VMs, containers and isolates together. For example -

  1. On a 64 vCPU machine, create two VMs - each with 32 vCPUs.
  2. Each VM runs 4 containers.
  3. Each container runs a process.
  4. Each process embeds a V8 instance inside an application that takes FaaS requests, executes functions and returns output by spinning up an isolate for each request.

Trying out JavaScript V8 isolates with Rust:

I will now try creating and using V8 isolates in Rust: a simple program that spawns N V8 isolates, each in its own thread, and executes a basic JavaScript snippet that just concatenates two strings. The official implementation of V8 is written in C++ as part of the Chromium project by Google and the community, so I will use the Rust wrapper crate v8 (rusty_v8), written as part of the Deno project. I use the cargo package manager to spin up the project:

cargo init rusty-js

Next I will add v8 crate as one of my project dependencies, my Cargo.toml file looks like this:

[package]
name = "rusty-js"
version = "0.1.0"
edition = "2021"

[dependencies]
v8 = {version = "0.42.0"}

I can now import the v8 crate as follows:

extern crate v8;

As stated earlier, I am creating a thread per request to execute the function inside an isolate and return its result. For this I create a function called isolate_executor, which initializes a new isolate and creates all the necessary contexts, handles and scopes to compile and execute the sample script. You can read more about contexts, handles and scopes here. Most of this code is taken as-is from the hello-world example of rusty_v8. This is how my task function looks:

fn isolate_executor(code: &str) -> String {
    // Create a new Isolate and make it the current one.
    let isolate = &mut v8::Isolate::new(v8::CreateParams::default());
    // Create a stack-allocated handle scope.
    let handle_scope = &mut v8::HandleScope::new(isolate);
    // Create a new context.
    let context = v8::Context::new(handle_scope);
    // Enter the context for compiling and running the hello world script.
    let scope = &mut v8::ContextScope::new(handle_scope, context);

    let v8_code_string = v8::String::new(scope, &code).unwrap();

    // compile and run:
    let script = v8::Script::compile(scope, v8_code_string, None).unwrap();
    // Run the script to get the result.
    let result = script.run(scope).unwrap();

    result.to_string(scope).unwrap().to_rust_string_lossy(scope)
}

Now I will create a function that launches isolate_executor on a thread and returns its handle to the caller, so I can wait for its output using join():

fn execute_in_isolate(code: &'static str) -> thread::JoinHandle<String> {
    thread::spawn(|| isolate_executor(code))
}
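Spawning one OS thread per request is fine for a demonstration, but a real executor would more likely keep a fixed pool of worker threads (each of which could own a long-lived isolate) and feed them requests over a channel. Here is a stdlib-only sketch of that dispatch pattern; the worker body just echoes the code string in place of real V8 execution, and names like Job and start_pool are my own:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A request: the JavaScript source to run, plus a channel for the result.
struct Job {
    code: String,
    reply: mpsc::Sender<String>,
}

// Start `workers` threads that pull jobs off a shared channel.
fn start_pool(workers: usize) -> mpsc::Sender<Job> {
    let (tx, rx) = mpsc::channel::<Job>();
    let rx = Arc::new(Mutex::new(rx));
    for _ in 0..workers {
        let rx = Arc::clone(&rx);
        thread::spawn(move || loop {
            // Lock only long enough to take one job; the guard is dropped
            // at the end of this statement.
            let msg = rx.lock().unwrap().recv();
            let job = match msg {
                Ok(job) => job,
                Err(_) => break, // all senders gone: shut down
            };
            // A real worker would call isolate_executor(&job.code) here;
            // this sketch just echoes the input.
            let _ = job.reply.send(format!("executed: {}", job.code));
        });
    }
    tx
}

fn run_jobs(n: usize) -> Vec<String> {
    let pool = start_pool(4);
    let (reply_tx, reply_rx) = mpsc::channel();
    for i in 0..n {
        let job = Job { code: format!("fn{}", i), reply: reply_tx.clone() };
        pool.send(job).unwrap();
    }
    drop(reply_tx); // so the result iterator below terminates
    let mut results: Vec<String> = reply_rx.iter().collect();
    results.sort();
    results
}

fn main() {
    let results = run_jobs(8);
    println!("{:?}", results);
}
```

This avoids paying thread-creation cost per request, at the price of bounding concurrency to the pool size.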

Now I will create the main function which creates N execution isolates by calling execute_in_isolate N times by passing the sample JavaScript code snippet:

fn main() {

    let n = env::args()
        .last()
        .unwrap()
        .parse::<usize>()
        .unwrap();

    let code = "const concat = (a, b) => { return a + b }\nconcat('Hello', 'World')";

    let platform = v8::new_default_platform(0, false).make_shared();
    v8::V8::initialize_platform(platform);
    v8::V8::initialize();

    {
        let mut handles: Vec<thread::JoinHandle<String>> = Vec::new();

        // execute n isolates
        for _ in 0..n {
            handles.push(execute_in_isolate(code));
        }

        for i in 0..n {
            let result = handles.pop().unwrap().join();
            if result.is_err() {
                println!("id={}, error={:?}", i, result.unwrap_err());
            } else {
                println!("id={}, result={}", i, result.unwrap())
            }
        }
    }

    unsafe {
        v8::V8::dispose();
    }
    v8::V8::dispose_platform();
}

This is just demonstration code, so many errors are not handled properly. I can now build the binary using cargo:

cargo build

Once the build succeeds, the binary will be created in the target/debug/ directory (since I created a debug build). I will run this binary with N=200, which means 200 threads running the sample program in 200 different isolates. I will use the time command to measure the time taken to complete 200 concurrent isolate executions.

time ./target/debug/rusty-js 200

Here is the output:

......
id=189, result=HelloWorld
id=190, result=HelloWorld
id=191, result=HelloWorld
id=192, result=HelloWorld
id=193, result=HelloWorld
id=194, result=HelloWorld
id=195, result=HelloWorld
id=196, result=HelloWorld
id=197, result=HelloWorld
id=198, result=HelloWorld
id=199, result=HelloWorld

real    0m0.513s
user    0m0.749s
sys     0m0.385s

I am not doing any benchmarks here, so take these time values as rudimentary - it took around 0.513s to spin up and execute my sample JavaScript code across 200 isolates, in a debug build. Here is the final code, in case you are interested:

extern crate v8;

use std::env;
use std::thread;


fn isolate_executor(code: &str) -> String {
    // Create a new Isolate and make it the current one.
    let isolate = &mut v8::Isolate::new(v8::CreateParams::default());
    // Create a stack-allocated handle scope.
    let handle_scope = &mut v8::HandleScope::new(isolate);
    // Create a new context.
    let context = v8::Context::new(handle_scope);
    // Enter the context for compiling and running the hello world script.
    let scope = &mut v8::ContextScope::new(handle_scope, context);

    let v8_code_string = v8::String::new(scope, &code).unwrap();

    // compile and run:
    let script = v8::Script::compile(scope, v8_code_string, None).unwrap();
    // Run the script to get the result.
    let result = script.run(scope).unwrap();

    result.to_string(scope).unwrap().to_rust_string_lossy(scope)
}

fn execute_in_isolate(code: &'static str) -> thread::JoinHandle<String> {
    thread::spawn(|| isolate_executor(code))
}

fn main() {

    let n = env::args()
        .last()
        .unwrap()
        .parse::<usize>()
        .unwrap();

    let code = "const concat = (a, b) => { return a + b }\nconcat('Hello', 'World')";

    let platform = v8::new_default_platform(0, false).make_shared();
    v8::V8::initialize_platform(platform);
    v8::V8::initialize();

    {
        let mut handles: Vec<thread::JoinHandle<String>> = Vec::new();

        // execute n isolates
        for _ in 0..n {
            handles.push(execute_in_isolate(code));
        }

        for i in 0..n {
            let result = handles.pop().unwrap().join();
            if result.is_err() {
                println!("id={}, error={:?}", i, result.unwrap_err());
            } else {
                println!("id={}, result={}", i, result.unwrap())
            }
        }
    }

    unsafe {
        v8::V8::dispose();
    }
    v8::V8::dispose_platform();
}

We can also extend this example into a simple web server exposing a gRPC or RESTful interface through which remote clients can connect and execute JavaScript code - at that point we have a very rudimentary FaaS platform. We can follow the steps below to create our own basic JavaScript FaaS platform that exploits the performance offered by V8 isolates:

  1. Create a centralized object storage registry that stores JavaScript code.
  2. Create and structure a database that stores necessary user credentials and associates the JavaScript code stored in object storage with users.
  3. Create a web service and expose APIs for users to register and upload their JavaScript code - a unique name will be given to the code they upload.
  4. Create a web worker that, upon a user’s request via a REST or gRPC API call, takes this unique name, downloads the code, creates a V8 isolate to execute it and returns the output.
  5. To avoid downloading the code every time, the worker can keep a simple cache of downloaded code for subsequent runs (or we can use a Redis cache to make the cache distributed across workers).
  6. Use a container orchestrator to deploy and manage our web-workers across multiple machines.
  7. As an improvement, we can also use load balancers and auto-scalers to balance load and scale the deployment. We can also build a system that leases API keys and manages quotas.
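The in-worker cache from step 5 can be as simple as a map in front of the object-storage fetch. Here is a minimal sketch; fetch_from_storage is a hypothetical stand-in for a real object-storage client:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a real object-storage client.
fn fetch_from_storage(name: &str) -> String {
    format!("// code for {}", name)
}

struct CodeCache {
    entries: HashMap<String, String>,
    fetches: usize, // how many times we actually hit object storage
}

impl CodeCache {
    fn new() -> Self {
        CodeCache { entries: HashMap::new(), fetches: 0 }
    }

    // Return cached code, fetching from storage only on a miss.
    fn get(&mut self, name: &str) -> String {
        if let Some(code) = self.entries.get(name) {
            return code.clone();
        }
        self.fetches += 1;
        let code = fetch_from_storage(name);
        self.entries.insert(name.to_string(), code.clone());
        code
    }
}

fn main() {
    let mut cache = CodeCache::new();
    let a = cache.get("user1/concat");
    let b = cache.get("user1/concat"); // second call is served from the cache
    assert_eq!(a, b);
    println!("fetches: {}", cache.fetches);
}
```

A production version would add eviction and invalidation when users upload new code, but the lookup-then-fetch shape stays the same.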

References:

  1. Deno
  2. Ryan Dahl’s post on JavaScript containers
  3. Cloudflare blog
  4. Cloudflare blog
  5. v8 isolates in go