Calix provides the cloud, software platforms, systems and services required for communications service providers to simplify their businesses, excite their subscribers and grow their value.
This is a remote-based position in the US.
We, the Cloud Platform Engineering team, are responsible for the platforms, tools, and CI/CD pipelines at Calix. Our mission is to enable Calix engineers to accelerate the delivery of world-class products while ensuring the high availability, scalability, reliability, and security of those platforms.
We are seeking a skilled and experienced GCP Looker Administrator to join the Cloud Platform team. The ideal candidate will be responsible for managing, optimizing, and maintaining our Looker instance hosted on Google Cloud Platform (GCP). This role involves ensuring the smooth operation of Looker, supporting business intelligence (BI) initiatives, and enabling data-driven decision-making across the organization. The GCP Looker Administrator will work closely with data engineers, analysts, and business stakeholders to deliver scalable and efficient solutions.
Responsibilities:
· Design/rearchitect our infrastructure platform components to be highly available, scalable, reliable, and secure.
· Manage and administer the Looker platform hosted on GCP, including user access, permissions, and security settings.
· Monitor system performance, troubleshoot issues, and optimize Looker for scalability and reliability.
· Implement and maintain Looker’s data connections to various data sources (e.g., BigQuery, Cloud SQL, etc.).
· Ensure compliance with data governance and security policies.
· Optimize Looker performance by tuning queries, managing dashboards, and improving data models.
· Collaborate with data engineers to design and implement efficient data pipelines and ETL processes.
· Stay updated with Looker and GCP updates, features, and best practices to ensure the platform is leveraged effectively.
· Ensure observability is an integral part of the infrastructure platforms and provides adequate visibility into their health, utilization, and cost.
· Implement Infrastructure as Code (IaC) using tools like Terraform/Terragrunt.
· Build tools that predict saturation/failures and take preventive actions through automation.
· Collaborate extensively with cross-functional teams to understand their requirements; educate them through documentation and training, and improve the adoption of the platforms/tools.
Qualifications:
· 8+ years of experience building large-scale distributed systems in an always-available production environment.
· 3+ years of experience administering Looker in a GCP environment.
· Knowledge of scripting languages (e.g., Python, JavaScript) is a plus.
· Strong understanding of GCP services, particularly BigQuery, Cloud SQL, and Data Studio.
· Experience with SQL, data modeling, and ETL processes.
· Familiarity with BI tools and data visualization best practices.
· Knowledge of data governance, security, and compliance standards.
· Experience with data pipeline tools like Dataflow, Kafka, or Pub/Sub, and with Kubernetes and its ecosystem.
· Experience in Python, Shell/Bash Scripting, or a similar language.
· Hands-on experience with observability platforms/tools like Grafana and Prometheus.
· Experience coaching and mentoring junior engineers; strong verbal and written communication skills.
· GCP certification would be an added advantage.
· Bachelor’s degree in computer science or equivalent.
Compensation will vary based on geographical location (see below) within the United States. Individual pay is determined by the candidate's location of residence and multiple factors, including job-related skills, experience, and education.
For more information on our benefits, click here.
There are different ranges applied to specific locations. The average base pay range (or OTE range for sales) in the U.S. for the position is listed below.
San Francisco Bay Area Only:
156,400.00 - 265,700.00 USD Annual
All Other Locations:
136,000.00 - 231,000.00 USD Annual