
Data Lake Security: Dive into the Best Practices

Enjoy the benefits without compromising on security by implementing practices to reduce the risk of noncompliance, data mismanagement or other incidents.
Jan 26th, 2023 7:34am

Data is the most valuable resource on earth, and a business’ success scales up with their ability to maximize value from data. That’s why many organizations are turning to data lakes to improve analytics, enable more effective collaboration and support data-driven decision-making at scale.

Different from traditional relational databases, data lakes are capable of ingesting data in its raw form from multiple sources.

While data lakes promise to deliver superior business outcomes, their rapid adoption creates a situation where some teams lack the resources and domain expertise to ensure compliance and security controls are in place. Complicating this, a broad set of internal and sometimes external roles can use the lake, amplifying potential risks to the business.

To realize the benefits of a data lake without compromising on security, organizations need to follow several best practices to reduce the risk of noncompliance, data mismanagement, data leakage or other security incidents.

From Database to Data Lake

Database technology was introduced in the 1960s as computers became more accessible and organizations sought a solution to efficiently store and manage data. For decades, online transactional processing (OLTP) workloads and relational databases served as the workhorse — delivering rapid, accurate data processing.

Yet by the 1980s, data warehouses transformed data processing from transactional or operational systems to decision-support systems. This shift enabled companies to aggregate data from across multiple environments to gather business intelligence (BI) and support strategic decision-making.

Today almost every organization uses databases, data warehouses and BI to inform innovation and guide strategic decisions. However, with the rise of cloud computing and modern coding languages, the ways in which databases are used are evolving for several reasons:

  • Organizations realize they can get more value out of their data if they don’t apply a predefined schema or limit how it can be used across transactional or analytical systems.
  • Data is used to develop and train machine learning (ML) models for analytics or to modernize existing workloads running on any type of database.
  • Cloud computing allows for the rapid provisioning and modernizing of workloads at a pace and scale that was impossible just a few years ago.

While some businesses remain focused on relational databases or data warehouses, and primarily on structured data, data-savvy organizations increasingly question that narrow focus.

Data warehouses work exceptionally well at processing and analyzing structured data, but they’re unable to capture raw and unstructured data, a severe limitation for digital businesses. As a result, nonrelational databases, such as data lakes, are growing in popularity, with some data architects now defaulting to data lakes for both new workloads and to modernize existing ones.

Why You Should Consider a Data Lake

Increasingly, organizations are starting their data life cycle in a data lake because they gain immediate value and can use it to build ML models, perform ad-hoc analytics queries, feed countless analytics systems and more.

Traditionally, data warehouses have been used to regularly analyze large amounts of structured data or to produce periodical reports. However, they require businesses to apply a predefined schema to data before processing and storing it, limiting how the data can be used across transactional or analytical systems.

Alternatively, data lakes don’t require the same upfront work. This allows for the integration and storage of data, unconverted or with minimal treatment, as it’s ingested into the data lake from multiple sources, including unstructured log data, internet of things (IoT) sensors, and social media or multimedia content.

This provides three benefits. Users can:

  • Process data as it flows into the data lake in near-real time using stream-processing tools like Apache Kafka.
  • Derive specific insights directly from the data lake using a high-performance query engine like Google BigQuery or Amazon Athena.
  • Perform on-demand analytics on large volumes of structured and unstructured data with tools like Elasticsearch to search, filter and visualize data from logs and operational data.
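The schema-on-read approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the event data and function names are invented for this example, not drawn from any specific product): raw, heterogeneous records land in the lake as-is, and a schema is applied only when a query needs it.

```python
import json

# Hypothetical raw events landing in a data lake: heterogeneous records
# from different sources, with no schema enforced at write time.
raw_events = [
    '{"source": "iot", "device_id": "t-17", "temp_c": 21.5}',
    '{"source": "web", "user": "alice", "action": "login"}',
    '{"source": "iot", "device_id": "t-17", "temp_c": 22.1}',
]

def query_iot_temperatures(events):
    """Schema-on-read: parse raw records and extract only the IoT shape."""
    readings = []
    for line in events:
        record = json.loads(line)          # parse the raw JSON as stored
        if record.get("source") == "iot":  # filter to the schema this query needs
            readings.append((record["device_id"], record["temp_c"]))
    return readings

print(query_iot_temperatures(raw_events))
# [('t-17', 21.5), ('t-17', 22.1)]
```

A warehouse would have rejected or forced a transformation of the web-login record at ingest time; here it is stored untouched and simply ignored by queries that don't need it.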

Are Data Lakes Secure?

Data in a data lake needs the same level of protection as data stored in a relational database, if not more, since the lake often serves as the sole repository for a company’s data.

The three key security risks facing data lakes are:

  • Access control: With no database tables and more fluid permissions, access control is more challenging in a data lake. Moreover, permissions are difficult to set up and must be based on specific objects or metadata definitions. Commonly, employees across the company also have access to the lake, which contains personal data or data that falls under compliance regulations. With 58% of security incidents caused by insider threats, according to a commissioned Forrester Consulting study, employee access to sensitive data is a security nightmare if left unchecked.
  • Data protection: Data lakes often serve as a singular repository for an organization’s information, making them a valuable target to attack. Without proper access controls in place, bad actors can gain access and obtain sensitive data from across the company.
  • Governance, privacy, and compliance: Because employees from across the company can feed data into the data lake without inspection, some data may contain privacy and regulatory requirements that other data doesn’t. What’s more, locating and monitoring personal data across data lake storage architecture can be challenging.
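Because a data lake has no database tables to attach grants to, access control must hang off objects and their metadata, as noted above. The following is a hedged sketch of that idea, with invented object paths, tags and a default-deny policy; a real deployment would use the access-control mechanisms of its storage platform rather than application code like this.

```python
# Hypothetical object-level access control keyed on metadata tags,
# since a data lake offers no table-level grants to rely on.
OBJECT_TAGS = {
    "s3://lake/raw/iot/2023/01/readings.json": {"classification": "internal"},
    "s3://lake/raw/hr/payroll.csv": {"classification": "restricted"},
}

# Assumed policy: which roles may read each classification level.
READ_POLICY = {
    "internal": {"analyst", "engineer", "admin"},
    "restricted": {"admin"},
}

def can_read(role, object_path):
    """Allow access only if the role is cleared for the object's tag."""
    tags = OBJECT_TAGS.get(object_path)
    if tags is None:
        return False  # default-deny: untagged or unknown objects are blocked
    return role in READ_POLICY.get(tags["classification"], set())

assert can_read("analyst", "s3://lake/raw/iot/2023/01/readings.json")
assert not can_read("analyst", "s3://lake/raw/hr/payroll.csv")
```

The default-deny branch matters most: in a lake, data that nobody has classified yet is exactly the data whose sensitivity is unknown.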

Failing to close these gaps could force organizations to choose between limiting the data they store in a data lake and putting themselves at risk of noncompliance. Or in a worst-case scenario, it could lead to a data leak or security incident.

How to Secure a Data Lake 

Data is the lifeblood of the modern business, and an effective security strategy needs to start with securing it.

To gain visibility and control over a data lake, there are four steps a business should take:

  • Outline a standardized data access process: Used both by human users and integrated systems, the process should enable tracking of access and use of the data.
  • Create a data classification scheme and catalog: Data in the lake should be classified by content, usage scenarios, types and possible user groups with a catalog that enables the search and retrieval of data. There should also be a convenient method to separate the data you want to keep from data you want to delete.
  • Enable data protection: Security controls, data encryption and automatic monitoring must be in place, and alerts should be raised when unauthorized parties access the data or when authorized users perform suspicious activities.
  • Enforce data governance, privacy and compliance: There should be clear policies, communicated to all relevant employees, on how to navigate and use the data lake, promote data quality and handle sensitive data ethically. A data lake commonly stores historical data, which should be retained in compliance with data privacy standards.
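The classification step above can be automated in its simplest form by scanning raw records for sensitive patterns as they enter the lake. This is a deliberately minimal sketch with two invented patterns; production classifiers cover many more categories and use purpose-built scanning services rather than two regular expressions.

```python
import re

# Hypothetical content-based classifier: tag records so sensitive data
# can be cataloged, monitored and separated from data that is safe to keep.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record: str) -> set:
    """Return the set of PII categories detected in a raw record."""
    return {name for name, pattern in PII_PATTERNS.items()
            if pattern.search(record)}

print(classify("contact: alice@example.com, ssn 123-45-6789"))
```

Tags produced this way can feed the catalog, drive retention decisions and trigger the monitoring alerts described in the previous step.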

Maximize Data Value While Preventing Security and Privacy Risks

Historically, relational databases were the default storage systems for businesses, but new advancements in data storage, capture and analytics have made it possible to extract value from raw data in ways that were inconceivable only a few years ago.

More organizations are adopting nonrelational databases, like data lakes, thanks to their ability to provide real-time analytics and capture additional data types. However, data lakes present a complex challenge: managing security while maintaining compliance with privacy regulations.

To address the security and compliance risks associated with data lakes, organizations should start by creating an effective and efficient way to classify and discover data across their environment. Next, organizations must be able to identify who is accessing data, when a compromised user accesses sensitive data and prevent data from being stolen by malicious insiders.

While these security best practices serve as a foundational step toward creating a more secure data lake environment, organizations should invest in a holistic data-centric security solution that is designed to protect data no matter where it lives and whatever form it’s in.

TNS owner Insight Partners is an investor in: Enable, Pragma.