Data Lifecycle Management
What is Data Lifecycle Management?
Data lifecycle management (DLM) is the policy-driven approach to managing data from its point of origin to its eventual deletion. Today’s enterprises generate information at a phenomenal pace, by some estimates more than doubling in volume every two years. Making meaningful use of that data requires a deliberate, directed process for gathering, maintaining, protecting, and applying it. Effective DLM provides structure and organization for business information, ensuring that it supports business objectives rather than simply consuming storage.
What are the stages of data lifecycle management?
Between the smartphone, the cloud, the edge, and the Internet of Things (IoT), we generate data faster than we can find a use for it. Purposefully managed data must have a clearly defined lifecycle, with functional stages governed by policies that enable businesses to access and use it effectively. The stages of data lifecycle management may vary from one organization to another, but most fit into the following general framework.
Stage 1: Create and Collect
The data lifecycle begins when data is created. Data sources are abundant, but not every detail is worth recording. Before you begin to capture data, it pays to have a clear understanding of its potential value and relevance to your business. Establish collection rules that preserve the data’s usefulness by recording when, where, how, and why it was generated.
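To make the idea concrete, a collection rule can be enforced at ingest time by requiring provenance alongside the payload. The Python sketch below is purely illustrative; the DataRecord fields and the example values are assumptions for this article, not part of any standard or HPE product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataRecord:
    """Hypothetical ingest record: data is only accepted with provenance."""
    payload: dict    # the collected data itself
    source: str      # where it was generated
    method: str      # how it was captured
    purpose: str     # why it was collected
    collected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example values are invented for illustration.
record = DataRecord(
    payload={"sensor_id": 42, "reading": 21.7},
    source="factory-floor-gateway-3",
    method="mqtt-ingest",
    purpose="predictive-maintenance",
)
```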
Stage 2: Store and Manage
Data must be stored and maintained in a stable environment appropriate for its origins, potential applications, and business priorities. Any data worth collecting is worth protecting, requiring policies for reliability, redundancy, and disaster recovery. Sensitive information may need to be encrypted for security, or to comply with government and industry regulations.
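As one illustration of an encryption-at-rest policy, sensitive payloads can be encrypted before they are written to storage. This minimal sketch uses the open-source Python cryptography library; it is a generic example, not an HPE-specific mechanism, and in a real deployment the key would come from a managed key store rather than being generated inline.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: generate a key in place instead of fetching it
# from a key management service.
key = Fernet.generate_key()
cipher = Fernet(key)

sensitive = b"patient_id=1234,diagnosis=..."
encrypted = cipher.encrypt(sensitive)   # this ciphertext is stored at rest
decrypted = cipher.decrypt(encrypted)   # recovery requires the same key
assert decrypted == sensitive
```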
Stage 3: Use and Share
Data is only valuable if it can be made available to authorized users for legitimate business purposes. Users must be able to locate, access, modify, and create data as needed. Policies must be established to determine which users are authorized, and when and how information can be used.
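A simple way to picture such policies is a role-to-permission mapping consulted before every operation. The sketch below is a hypothetical, in-memory stand-in for what a real identity and access management (IAM) system would do; the roles and actions are assumptions.

```python
# Hypothetical role-to-permission policy; a real deployment would
# delegate this to an IAM system, not an in-memory dict.
PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "steward":  {"read", "write", "share"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the role's policy permits the requested action."""
    return action in PERMISSIONS.get(role, set())

assert is_authorized("engineer", "write")
assert not is_authorized("analyst", "share")
```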
Stage 4: Archive
At some point, data ceases to be significant for day-to-day applications and workflows but still retains enough value that it may be relevant or required in the future. It still needs to be organized and protected, but immediate accessibility becomes less critical. Common examples include records that must be kept for legal or regulatory purposes. Inactive data can be archived on a variety of media, on and off the network, and returned to active status if necessary.
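An archival policy often reduces to a tiering rule driven by a record’s age. The following sketch assumes an illustrative one-year active window; the threshold and tier names are hypothetical, not a prescribed policy.

```python
from datetime import datetime, timedelta, timezone

ACTIVE_WINDOW = timedelta(days=365)  # illustrative policy value

def tier_for(record_age: timedelta) -> str:
    """Hypothetical tiering rule: active storage first, archive after."""
    return "archive" if record_age > ACTIVE_WINDOW else "active"

age = datetime.now(timezone.utc) - datetime(2020, 1, 1, tzinfo=timezone.utc)
print(tier_for(age))  # a record from 2020 lands in the archive tier
```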
Stage 5: Destroy
With few exceptions, data should not be retained indefinitely. Enterprises continuously generate enormous volumes of data, and the cost of data storage is not insignificant. Before the expense of storing old data exceeds its probable value, it’s time to purge it from databases and archives. Just as it was important to decide what data should be captured in the first place, it’s important to recognize when it reaches the end of its useful life.
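A destruction policy can be expressed as a retention check that also respects exceptions such as legal holds. The sketch below assumes a hypothetical seven-year retention period; the period and the legal_hold flag are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7 * 365)  # e.g., a seven-year retention rule

def should_purge(collected_at: datetime, legal_hold: bool = False) -> bool:
    """Purge only when retention has lapsed and no legal hold applies."""
    expired = datetime.now(timezone.utc) - collected_at > RETENTION
    return expired and not legal_hold
```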
HPE and data lifecycle management
HPE has a wide variety of solutions and services available to help organizations plan and implement effective DLM strategies. HPE GreenLake for Storage provides a comprehensive suite of data management services to support applications and business information across edge, core, and cloud operations.
- HPE GreenLake for Block Storage offers simple quoting and ordering plus intent-based provisioning to meet any service-level agreement with self-service agility, accelerating development for new apps, services, and initiatives.
- HPE DataOps Management can deploy new data infrastructure on demand in minutes. New systems are automatically discovered and easily configured, and administrators can manage and monitor cloud-native infrastructure from practically any device.
- HPE GreenLake for HCI builds clouds on demand across edge, cloud, and on-premises environments, combining cloud-based management with self-service agility.
- HPE Backup and Recovery Service is designed to modernize and protect data operations across clouds, coordinating snapshots for rapid restores, recoveries, and cloud backups through a single pane of glass.
- HPE InfoSight leverages advanced AI to provide self-managing, self-healing, self-optimizing AIOps from edge to cloud, ensuring that your apps are always on and always fast.
- HPE CloudPhysics simplifies workload and infrastructure planning from edge to cloud with continuous monitoring and instant data-driven analysis across heterogeneous systems, accelerating time-to-value and maximizing return on investment.