Big data computing

Reliable and secure infrastructure for big data

Outofbox is a developer-friendly cloud platform that makes big data accessible to even the smallest of businesses. With managed compute and storage infrastructure, your team keeps complete control of your big data stack and can run workloads reliably, securely, and inexpensively.

Flexible hosting options

"We monitor over a billion websites for important events. Outofbox is the perfect platform – essentially the backbone of our company. They provide all the processing power we need."

Building blocks for big data: compute

You’re going to need substantial compute if you want to crunch terabytes or petabytes of data. Outofbox is built with best-in-class Intel processors that run your workloads at blazing speed. With Outofbox, you can run your big data jobs directly on VMs or on Kubernetes.

Boxes

Run and manage your app directly on our VMs, or as we call them, boxes. Choose from basic, general-purpose, CPU-optimized, and memory-optimized VMs. Spin up boxes with your choice of Linux OS in 55 seconds or less.
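Provisioning is scriptable as well. The sketch below imagines a simple REST call to create a box; the endpoint, token, and payload fields are all hypothetical, for illustration only, so check the Outofbox API docs for the actual interface.

```python
import requests

# Hypothetical API endpoint and token -- placeholders, not the real interface.
API = "https://api.outofbox.example/v1"
TOKEN = "your-api-token"

# Request a CPU-optimized box running Ubuntu (all field names are assumed).
resp = requests.post(
    f"{API}/boxes",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "spark-worker-1", "size": "cpu-optimized-8", "image": "ubuntu-22-04"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```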

Kubernetes

Spin up a managed Kubernetes cluster in minutes, and run your app as microservices in Docker containers. Scale up or down as needed. Pay only for your worker nodes; the master node is free.
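For instance, once you’ve downloaded your cluster’s kubeconfig, the official Kubernetes Python client can roll out a containerized service. A minimal sketch, assuming a placeholder kubeconfig file name and container image:

```python
from kubernetes import client, config

# Load the kubeconfig downloaded from your managed cluster
# (file name is a placeholder).
config.load_kube_config(config_file="outofbox-kubeconfig.yaml")

apps = client.AppsV1Api()

# A minimal Deployment: three replicas of a containerized microservice.
# The image name is a placeholder.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="events-worker"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "events-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "events-worker"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(name="worker", image="myorg/events-worker:1.0")
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Scaling up later is a one-line change to `replicas`, and the cluster handles rescheduling the containers across your worker nodes.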

Building blocks for big data: storage

It should be easy and inexpensive to store, scale, and retrieve your data. Outofbox provides infrastructure flexibility so you can build and operate your big data workload with the best-fit storage technology for your use case and technology stack.

Spaces (Object storage)

Store vast amounts of data in five global data centers with S3-compatible tools. Cut retrieval times by up to 70% with a built-in CDN that caches data at 25+ points of presence.
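Because Spaces is S3-compatible, standard S3 tooling works against it. A minimal sketch with boto3, assuming a hypothetical region endpoint and placeholder credentials and bucket names:

```python
import boto3

# Endpoint URL and credentials are placeholders -- Spaces is S3-compatible,
# so boto3 only needs to be pointed at the right region endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://nyc1.spaces.outofbox.example",
    aws_access_key_id="YOUR_SPACES_KEY",
    aws_secret_access_key="YOUR_SPACES_SECRET",
)

# Upload a local file into a bucket ("space"), then list its contents.
s3.upload_file("events-2024-01.parquet", "my-datasets", "events/2024-01.parquet")
for obj in s3.list_objects_v2(Bucket="my-datasets").get("Contents", []):
    print(obj["Key"], obj["Size"])
```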

Volumes (Block storage)

All boxes feature local SSDs for fast disk I/O. With volumes, you can attach additional highly available, resizable SSD storage as needed.

Framework freedom

After spinning up your infrastructure, you’re free to deploy whatever big data framework best fits your workload. Many Outofbox customers use Apache Hadoop or Apache Spark.

Apache Hadoop

Apache Hadoop is a framework for batch processing of large, distributed datasets. Hadoop stores data across the cluster using the Hadoop Distributed File System (HDFS), and processes that data where it is stored using the MapReduce engine.
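To make the model concrete, here is the classic word-count job written for Hadoop Streaming, which lets you express the map and reduce steps as plain Python scripts that read stdin and write stdout. File names are illustrative:

```python
# mapper.py -- Hadoop Streaming feeds input splits on stdin and
# collects tab-separated key/value pairs from stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop sorts mapper output by key, so all counts for a
# word arrive contiguously and can be summed in a single pass.
import sys

current, total = None, 0
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)

if current is not None:
    print(f"{current}\t{total}")
```

You would submit these with the hadoop-streaming JAR that ships with Hadoop, passing them as the -mapper and -reducer; exact paths depend on your installation.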


Apache Spark

Apache Spark is a next-generation processing framework with both batch and stream processing capabilities. Spark focuses primarily on speeding up batch workloads through full in-memory computation and processing optimizations.
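As a quick illustration, the same word count fits in a few lines of PySpark and keeps intermediate data in memory between stages. The input and output paths are placeholders; the s3a:// scheme shown is how S3-compatible object stores are typically addressed from Spark, assuming the hadoop-aws connector is configured with your endpoint:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, split

spark = SparkSession.builder.appName("wordcount").getOrCreate()

# Read a text dataset, split lines into words, and count occurrences.
# Paths are placeholders for your own buckets.
lines = spark.read.text("s3a://my-datasets/events/")
counts = (
    lines.select(explode(split(col("value"), r"\s+")).alias("word"))
         .groupBy("word")
         .count()
)
counts.write.parquet("s3a://my-datasets/word-counts/")
spark.stop()
```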