POSH: A Data-Aware Shell for Faster Distributed Text Processing

POSH includes both a shell and an associated distributed runtime, and can speed the processing of remote data by orders of magnitude by moving the computationally heavy work to where the data resides.
Jan 19th, 2021 1:24pm by

The Unix command line offers a rich set of data processing tools, such as cat, grep and awk, for searching and filtering text in large files. But executing these commands on remote data over a campus network, or across a cloud, can bring research to a halt, as the data scientist waits for the results to be returned to the command line or to a local file.
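For example, in a routine filter pipeline like the one below, if the log file lived on an NFS mount, every byte of it would cross the network before grep could discard the uninteresting lines. (The path and toy data here are purely illustrative.)

```shell
# A typical Unix filter pipeline. If access.log sat on an NFS mount,
# the whole file would travel over the network just so grep could
# throw most of it away.
mkdir -p /tmp/nfs_demo
printf 'ERROR disk full\nINFO ok\nERROR disk full\nINFO ok\n' > /tmp/nfs_demo/access.log
grep 'ERROR' /tmp/nfs_demo/access.log | sort | uniq -c
```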

“Shells should consider data locality,” explained Deepti Raghavan, a Stanford University Ph.D. student who is one of the creators of the data-aware Process Offload Shell (POSH), which she introduced during a presentation at the USENIX SRECon20 Americas conference held virtually last month.

Currently, POSH is a prototype, but the project raises some interesting ideas about how best to divide work so that it gets done as quickly as possible, while making it easier for the end user to execute these tasks. Tests have found that a POSH-based approach can deliver speedups of 1.5x to 15x when processing data on remote file systems, without modifying the data or the standard command line.

POSH includes both a shell and an associated distributed runtime, and can speed the processing of remote data by orders of magnitude by moving the computationally heavy work to where the data resides, such as on an NFS file storage server. Commands are issued locally in the user’s shell but are actually executed on the server holding the data, which can greatly expedite processing. Only the output is then shipped back to the command line on the local machine.

Traditional approaches can be a time sink because they involve moving the data to the client, which can be very slow for large data sets. There have been earlier approaches that, like POSH’s, move the processing to the data; MapReduce and Spark are two examples. But for the data researcher, they can be cumbersome to use, requiring code to interface with their APIs. “There might be more overhead than it is worth to use these systems,” Raghavan said.

The idea is to “run this command closer to the storage without changing the workflow of the developer,” Raghavan said.

POSH offers a shell identical to the canonical Bash shell, but it offloads some of the work that the commands require to proxy servers on or near the data storage. Proxy servers located on these remote storage servers can process the data in place. “This prevents lots of unnecessary data movement,” Raghavan said.
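Conceptually, this resembles the offload an experienced user might hand-roll with ssh, running the filter on the storage server and shipping only the matches back, for instance `ssh storage-host "grep ERROR /srv/logs/app.log"`. In the runnable sketch below, the hostname and paths are hypothetical and `sh -c` stands in for the remote shell; POSH's contribution is doing this placement automatically, with no change to the user's commands.

```shell
# Hand-rolled offload of the kind POSH automates. With real
# infrastructure, the third command would be an ssh invocation
# against the storage server; here sh -c stands in for the remote
# shell so the sketch runs locally.
mkdir -p /tmp/proxy_demo
printf 'ERROR boom\nINFO fine\nERROR boom again\n' > /tmp/proxy_demo/app.log
sh -c "grep ERROR /tmp/proxy_demo/app.log" > /tmp/proxy_demo/errors.txt
cat /tmp/proxy_demo/errors.txt
```

Only the two matching lines come back to the client; the rest of the file never leaves the (simulated) storage server.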

To determine which parts of a workload can be executed remotely, POSH uses a set of annotations and metadata about individual shell commands to decide where in the shell pipeline to hand off work to the remote proxy server. In general, users shouldn’t have to worry about annotations, though annotations will need to be created once for each relevant Unix command.

This metadata documents the file dependencies of each command, as well as all of its options and parameters. For a command involving multiple tools, the runtime needs to understand how much data flows between the commands, and whether the command can be parallelized across different servers. The runtime also includes a scheduling algorithm that schedules a workload across multiple servers to achieve an optimal execution time.
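As a rough sketch, an annotation for grep might capture something like the following. This is a hypothetical rendering of the kind of metadata described above, not POSH's actual annotation syntax:

```
# Hypothetical annotation sketch (illustrative only, not POSH's real format)
grep:
  args:
    - pattern: string          # not a file; ships along with the command
    - inputs:  input_file...   # file dependencies; these drive placement
  output: stdout               # typically small relative to the inputs
  parallelizable: per_input    # each input file can be filtered on its own server
```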

When a user types in a command, POSH generates a Directed Acyclic Graph (DAG) to represent the entire command workflow, which it can then execute.
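As an illustration, a pipeline such as `cat a.log b.log | grep ERROR | sort` might map onto a DAG shaped like this (a sketch of the idea, not actual POSH output):

```
read(a.log) --> grep ERROR (proxy A) --+
                                       +--> sort --> stdout (client)
read(b.log) --> grep ERROR (proxy B) --+
```

The grep nodes can be placed on proxies next to each file, so only the matching lines cross the network to the client-side sort.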

POSH is best used for I/O-intensive workloads where the data is stored in remote storage, such as NFS. In one test, the researchers ran a combined cat and grep command over 80GB of data spread across five different proxy-equipped servers; the results returned amounted to a minuscule 0.8KB. The test was run on both a cloud setup and a traditional university network. The team found a 10x speedup on the university network and a 2.5x speedup in the cloud setup.
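The shape of that benchmark is easy to reproduce in miniature on local files (the paper's run used 80GB spread across five servers; here each "shard" is a few bytes):

```shell
# Miniature version of the cat-plus-grep benchmark. Under POSH, each
# grep would run on the proxy next to its shard, and only the matching
# lines would travel back to the client.
mkdir -p /tmp/posh_bench
printf 'needle\nhay\nhay\n'  > /tmp/posh_bench/shard1.txt
printf 'hay\nneedle\nhay\n'  > /tmp/posh_bench/shard2.txt
cat /tmp/posh_bench/shard1.txt /tmp/posh_bench/shard2.txt | grep -c needle
# prints 2
```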

In another case, the team looked at the speed of three git commands (add, commit, status) across a code repository. Here, the add command returned results 10 to 15 times as fast as it would through the traditional approach. With the add command, git returns the status of each file checked, which, in a traditional setup, leads to a lot of back-and-forth between the shell and the remote file server.

“POSH saves on latency by avoiding many round trips,” Raghavan said.

Read the paper here and watch the video here.

Feature image by Shutterbug75 from Pixabay.
