This lab walks you through creating a streaming data pipeline with Cloud Pub/Sub, Dataflow, and BigQuery.
By the end of this lab, you will have:
Created a Dataflow job.
Processed streaming data into a BigQuery table.
Duration: 60 minutes
In a streaming pipeline, data is processed as soon as it arrives, without any manual intervention. In this lab, you publish messages to a Cloud Pub/Sub topic; a Dataflow job then processes those messages and writes them into a BigQuery table. The flow through which the data is processed is defined in the Dataflow job.
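To make the "publish" side of this flow concrete, here is a minimal Python sketch of sending one JSON record to a Pub/Sub topic with the `google-cloud-pubsub` client library. The project ID, topic ID, and record fields are illustrative placeholders, not values the lab prescribes.

```python
import json


def build_message(record: dict) -> bytes:
    """Encode a record as UTF-8 JSON bytes, the payload format a Pub/Sub message carries."""
    return json.dumps(record).encode("utf-8")


def publish(project_id: str, topic_id: str, record: dict) -> str:
    """Publish one record to a Pub/Sub topic and return the server-assigned message ID.

    Requires the google-cloud-pubsub package and GCP credentials;
    project_id and topic_id are hypothetical placeholders.
    """
    from google.cloud import pubsub_v1  # imported here so build_message stays dependency-free

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    future = publisher.publish(topic_path, build_message(record))
    return future.result()  # blocks until Pub/Sub acknowledges the message
```

In the lab itself you can publish test messages from the Cloud Console or the command line instead; this sketch only shows what a programmatic publisher would look like.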
Creating a Cloud Pub/Sub Topic.
Creating a BigQuery Dataset and Table.
Creating a Dataflow Job.
Publishing the data.
Checking the published data.
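The end-to-end flow behind these tasks can be sketched with plain Python, using an in-memory queue in place of the Pub/Sub topic and a list of row dictionaries in place of the BigQuery table. This is a local simulation for intuition only; none of these names are real GCP APIs.

```python
import json
from queue import Queue

# In-memory stand-ins for the GCP services used in the lab (illustrative only).
topic = Queue()       # plays the role of the Cloud Pub/Sub topic
bigquery_table = []   # plays the role of the BigQuery table (a list of row dicts)


def publish(record: dict) -> None:
    """Publish a JSON-encoded message to the 'topic' (the 'Publishing the data' task)."""
    topic.put(json.dumps(record).encode("utf-8"))


def run_dataflow_job() -> None:
    """Drain the topic, decode each message, and append it as a table row
    (the role the Dataflow job plays between Pub/Sub and BigQuery)."""
    while not topic.empty():
        message = topic.get()
        row = json.loads(message.decode("utf-8"))
        bigquery_table.append(row)


publish({"sensor": "s1", "reading": 21.5})
publish({"sensor": "s2", "reading": 19.0})
run_dataflow_job()
print(bigquery_table)  # both published records now sit in the "table"
```

In the real lab, the same movement of data happens continuously: Pub/Sub buffers incoming messages, the streaming Dataflow job consumes them as they arrive, and BigQuery is where you check the results at the end.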