Highly accurate, trustworthy, and timely metadata is gold to data catalogs and the business communities they serve. Integrations between DataOps.live and data catalogs like data.world are critical to the value those catalogs deliver to business users exploring data. These integrations let DataOps.live publish the rich, highly accurate metadata it gathers about data pipelines directly to the business catalog. For customers, the value of one-click, zero-effort, highly accurate data cataloguing is immense.
Join this LIVE hands-on technical masterclass to see how to integrate DataOps.live and data.world: automate an entire Snowflake infrastructure from scratch (including databases, roles, warehouses, and stages); ingest, transform, and test data; assure data quality; share data; and build a highly accurate, up-to-date data catalog every time a pipeline runs, all with a single click.
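To give a flavour of the infrastructure-automation step, here is a minimal, hypothetical sketch of the kind of Snowflake DDL a pipeline job might run, using the snowflake-connector-python package. The account settings, object names, and warehouse sizing are placeholders for illustration; in DataOps.live this provisioning is normally driven by declarative configuration rather than hand-written scripts.

```python
# Minimal sketch: provisioning core Snowflake objects from a pipeline job.
# Connection parameters and object names are illustrative placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SYSADMIN",
)

ddl_statements = [
    "CREATE DATABASE IF NOT EXISTS ANALYTICS_DB",
    "CREATE WAREHOUSE IF NOT EXISTS TRANSFORM_WH "
    "WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60",
    "CREATE ROLE IF NOT EXISTS ANALYST_ROLE",
    "GRANT USAGE ON DATABASE ANALYTICS_DB TO ROLE ANALYST_ROLE",
    "CREATE STAGE IF NOT EXISTS ANALYTICS_DB.PUBLIC.RAW_STAGE",
]

cur = conn.cursor()
try:
    for stmt in ddl_statements:
        cur.execute(stmt)  # each statement is idempotent via IF NOT EXISTS
finally:
    cur.close()
    conn.close()
```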
This session will showcase:
Building data models and data pipelines
Maintaining core metadata about the models within the models themselves: who the data steward is, whether a column contains PII, and so on (see the first sketch after this list)
Running data pipelines to generate rich metadata
Publishing rich metadata to data.world (see the publishing sketch after this list), including:
    Results of modeling and transformation runs
    Definitions of the tables and columns
    Metadata held in the repository as key-value pairs
Exploring these data objects and metadata in data.world
Automating data catalog updates from the metadata as pipelines evolve and change
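To make the "metadata within the models" idea concrete, the sketch below reads steward and PII annotations out of a model definition file with PyYAML. The file path and the key names (meta, data_steward, contains_pii) are assumptions for illustration, not necessarily the exact layout used by DataOps.live model files.

```python
# Minimal sketch: reading steward / PII annotations embedded in a model
# definition file. File path and key names are illustrative assumptions.
import yaml

MODEL_FILE = "models/customers.yml"  # hypothetical model definition file

with open(MODEL_FILE) as fh:
    spec = yaml.safe_load(fh)

for model in spec.get("models", []):
    meta = model.get("meta", {})
    print(f"model={model['name']}")
    print(f"  data steward: {meta.get('data_steward', 'unassigned')}")
    for column in model.get("columns", []):
        col_meta = column.get("meta", {})
        flag = "PII" if col_meta.get("contains_pii") else "non-PII"
        print(f"  column {column['name']}: {flag}")
```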
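And as a rough illustration of the publishing and automation steps, the sketch below pushes a small metadata payload to a data.world dataset over its REST API using requests, the sort of call that can run on every pipeline execution so the catalog stays in step as pipelines evolve. The endpoint path, payload fields, and dataset name are assumptions based on data.world's public API conventions; the real DataOps.live integration handles this publication for you.

```python
# Minimal sketch: publishing pipeline metadata to data.world via its REST API.
# Endpoint path, payload fields, and dataset name are assumptions; the
# DataOps.live orchestrator performs this publication automatically.
import os
import requests

API_TOKEN = os.environ["DW_AUTH_TOKEN"]       # data.world API token
DATASET = "my-org/pipeline-metadata"          # hypothetical owner/dataset id

metadata_payload = {
    "description": "Tables, columns, and key-value metadata from the latest pipeline run",
    "tags": ["dataops", "snowflake", "automated"],
}

resp = requests.patch(
    f"https://api.data.world/v0/datasets/{DATASET}",
    json=metadata_payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Catalog entry updated:", resp.json())
```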