Your company sends all Google Cloud logs to Cloud Logging.
Your security team wants to monitor the logs.
You want to ensure that the security team can react quickly if an anomaly such as an unwanted firewall change or server breach is detected.
You want to follow Google-recommended practices.
What should you do?
A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events.
B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.
C. Export logs to a Pub/Sub topic, and trigger Cloud Functions with the relevant log events.
D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.

Correct answer: C
To monitor logs for anomalies and react quickly, Google recommends creating a log sink in Cloud Logging that exports the relevant entries to a destination such as Pub/Sub, BigQuery, or Cloud Storage. When the goal is a fast reaction to each event, the sink should route entries to a Pub/Sub topic so that Cloud Functions can process them as they arrive.
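As an illustration of that export step, the sketch below creates such a sink with the Python client library. It is only a sketch: the project ID, sink name, topic, and filter are assumed placeholder values, not anything given in the question.

```python
# A minimal sketch, assuming the google-cloud-logging client library and an
# existing Pub/Sub topic; "my-project", the sink name, the topic, and the
# filter are illustrative placeholders.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")

# Route firewall-change audit entries to Pub/Sub the moment they are logged.
sink = client.sink(
    "firewall-change-sink",
    filter_='resource.type="gce_firewall_rule"',
    destination="pubsub.googleapis.com/projects/my-project/topics/security-alerts",
)
sink.create()
# The sink's writer identity still needs roles/pubsub.publisher on the topic.
```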
Option A: Scheduling a cron job with Cloud Scheduler to query the logs every minute is not recommended. Polling is neither scalable nor event-driven: the one-minute interval puts a floor on reaction time, and each run can miss or double-process events. Cloud Scheduler itself can only trigger an HTTP endpoint or publish a Pub/Sub message, so the job would also need a separate service that calls the Cloud Logging API to read the data.
Option B: Exporting logs to BigQuery is a sound practice for log analysis, and its SQL engine is well suited to extracting insight from large volumes of log data. However, triggering a query to process the log data is a batch operation: scheduled queries run at an interval rather than on each log event, so this approach adds detection latency and is not the recommended pattern when the security team must react quickly.
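To make that latency trade-off concrete, a scheduled job in this pattern would run something like the following against the exported tables. The dataset and table names are assumptions based on the default audit-log export naming, and the job only sees events written since its last scheduled run.

```python
# A minimal sketch, assuming a log sink already exports admin-activity audit
# logs to a BigQuery dataset; project, dataset, and table names are
# illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
SELECT
  timestamp,
  protopayload_auditlog.methodName AS method,
  protopayload_auditlog.authenticationInfo.principalEmail AS actor
FROM `my-project.security_logs.cloudaudit_googleapis_com_activity_*`
WHERE protopayload_auditlog.methodName LIKE '%firewalls%'
  AND timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 5 MINUTE)
"""

for row in client.query(query).result():
    print(row.timestamp, row.method, row.actor)
```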
Option C: Exporting logs to a Pub/Sub topic and triggering Cloud Functions with the relevant log events is the Google-recommended practice for this scenario. A log sink with a filter pushes matching entries to Pub/Sub as they are written, Pub/Sub scales to high message volumes, and a Cloud Function subscribed to the topic can alert or remediate within seconds of the event.
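For a concrete picture of the event-driven side, here is a minimal sketch of a Pub/Sub-triggered Cloud Function (a 1st-gen Python background function) that consumes the entries the sink publishes; the alerting step is a hypothetical placeholder.

```python
# A minimal sketch of a Pub/Sub-triggered Cloud Function; the alert action
# is a placeholder print statement, not a real notification integration.
import base64
import json

def process_log_entry(event, context):
    """Runs once per log entry the sink publishes to the topic."""
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    payload = entry.get("protoPayload", {})
    method = payload.get("methodName", "")

    if "firewalls" in method:
        actor = payload.get("authenticationInfo", {}).get("principalEmail", "unknown")
        # In practice, page the security team (chat webhook, incident tool, etc.).
        print(f"ALERT: firewall change {method} by {actor}")
```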
Option D: Exporting logs to a Cloud Storage bucket and triggering Cloud Run with the relevant log events is not recommended here. Cloud Storage sinks write log entries in batches, roughly hourly, so the data arrives far too late for rapid incident response. Cloud Storage is better suited to durable, low-cost log archival than to event-driven alerting.
Therefore, the best option for this scenario is option C: export logs to a Pub/Sub topic, and trigger Cloud Functions with the relevant log events.