Sr. Data Architect, Elk Grove, CA (Hybrid – 3 days a week on-site)
Posted: 01/06/2026
IT - Niche - Direct Placement - Job ID: 26-00057
Title: Sr. Data Architect
Locations: Elk Grove, CA (Hybrid – 3 days a week on-site)
Employment Type: Direct Hire/FTE
Compensation: $160,000 – $175,000 base + 20% bonus and excellent benefits
Work Requirements: US Citizens, GC Holders, or Authorized to Work in the U.S.
POSITION RESPONSIBILITIES
The Senior Data Architect is a technical leader responsible for building and optimizing a robust data platform in the automotive industry. In this full-time role, you will lead a team of data engineers and own the end-to-end architecture and implementation of the Databricks Lakehouse platform. You will collaborate closely with functional leaders, domain analysts, and other stakeholders to design scalable data solutions that drive business insights. This position demands deep expertise in Databricks on GCP and the ability to build end-to-end data pipelines that handle large volumes of structured, semi-structured, and unstructured data. You will provide strong leadership to ensure best practices in data engineering, performance tuning, and governance, and you will be expected to communicate complex technical concepts and data strategies to technical and non-technical audiences, including executive leadership.
KEY RESPONSIBILITIES
- Lead, mentor, and manage a team of data engineers, providing technical guidance and code reviews and fostering a high-performing team.
- Own the Databricks platform architecture and implementation, ensuring the environment is secure, scalable, and optimized for the organization’s data processing needs. Design and oversee the Lakehouse architecture leveraging Delta Lake and Apache Spark.
- Implement and manage Databricks Unity Catalog for unified data governance. Ensure fine-grained access controls and data lineage tracking are in place to secure sensitive data.
- Collaborate with analytics teams to develop and optimize Databricks SQL queries and dashboards. Tune SQL workloads and caching strategies for faster performance and ensure efficient use of the query engine.
- Lead performance tuning initiatives. Profile data processing code to identify bottlenecks and refactor for improved throughput and lower latency. Implement best practices for incremental data processing with Delta Lake (see the sketch after this list), and ensure compute cost efficiency (e.g., by optimizing cluster utilization and job scheduling).
- Work closely with domain analysts, data scientists and product owners to understand requirements and translate them into robust data pipelines and solutions. Ensure that data architectures support analytics, reporting, and machine learning use cases effectively.
- Integrate Databricks workflows into the CI/CD pipeline using DevOps principles and Git. Develop automated deployment processes for notebooks and jobs to promote consistent releases. Manage source control for Databricks code (using GitLab) and collaborate with DevOps engineers to implement continuous integration and delivery for data projects.
- Collaborate with security and compliance teams to uphold data governance standards. Implement data masking, encryption, and audit logging as needed, leveraging Unity Catalog and GCP security features to protect sensitive data.
- Stay up to date with the latest Databricks features and industry best practices. Proactively recommend and implement improvements (such as new performance optimization techniques or cost-saving configurations) to continuously enhance the platform's reliability and efficiency.
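To give candidates a concrete feel for the incremental-processing responsibility above, here is a minimal PySpark sketch of the pattern, assuming it runs in a Databricks notebook where `spark` is preconfigured. The table names (`etl.watermarks`, `raw.orders`, `curated.orders`), the pipeline key, and the `updated_at` watermark column are hypothetical illustrations, not details from this role:

```python
# Minimal sketch of an incremental Delta Lake load (hypothetical names).
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Look up the high-water mark recorded by the previous run so only new or
# changed rows are processed.
last_ts = (
    spark.table("etl.watermarks")
    .filter(F.col("pipeline") == "orders")
    .agg(F.max("high_watermark"))
    .first()[0]
)

# Pull only the incremental slice of the source table.
updates = spark.table("raw.orders").filter(F.col("updated_at") > F.lit(last_ts))

# MERGE the batch into the curated Delta table: update matching orders in
# place and insert new ones, instead of rewriting the whole table.
(
    DeltaTable.forName(spark, "curated.orders")
    .alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE-based upsert is what keeps runtimes and compute costs roughly flat as the source grows, since each run touches only the rows changed since the last watermark.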
SUPERVISORY RESPONSIBILITIES
- No direct reports; leadership of the data engineering team in this role is technical rather than managerial.
EDUCATION AND EXPERIENCE
- 10+ years of experience in data engineering, data architecture, or related roles, with a track record of designing and deploying data pipelines and platforms at scale.
- Significant hands-on experience with Databricks (preferably on GCP) and the Apache Spark ecosystem. Proficient in building data pipelines using PySpark/Scala and managing data in Delta Lake format.
- Strong experience working with cloud data platforms (GCP preferred; AWS/Azure also considered). Familiarity with GCP Cloud Storage principles.
- Strong skills in vector databases and embedding models to support scalable retrieval-augmented generation (RAG) systems. Proficient in optimizing retrieval and indexing for LLM integration.
- Strong experience managing structured, semi-structured, and unstructured data in Databricks.
- Ability to inspect existing data pipelines, discern their purpose and functionality, and re-implement them efficiently in Databricks.
- Advanced SQL skills with the ability to write and optimize complex queries. Solid understanding of data warehousing concepts and performance tuning for SQL engines.
- Proven ability to optimize ETL jobs for performance and cost efficiency. Experience tuning cluster configurations, parallelism, and caching to improve job runtimes and resource utilization.
- Demonstrated experience implementing data security and governance measures. Comfortable configuring Unity Catalog or similar data catalog tools to manage schemas, tables, and fine-grained access controls. Able to ensure compliance with data security standards and manage user/group access to data assets.
- Experience leading and mentoring engineering teams. Excellent project leadership abilities to coordinate multiple projects and priorities. Strong communication skills to effectively collaborate with cross-functional teams and present architectural plans or results to stakeholders.
- Experience working in an Agile environment.
TOOLS & TECHNOLOGIES
- Databricks Lakehouse Platform: Databricks Workspace, Apache Spark, Delta Lake, Databricks SQL, MLflow (for model tracking), and Postgres databases.
- Data Governance: Databricks Unity Catalog for data catalog and access control.
- Programming & Data Processing: PySpark and Python for building data pipelines and Spark jobs; SQL for querying.
- Cloud Services: GCP Cloud Storage, GCP Pub/Sub, and vector databases.
- DevOps & CI/CD: Git for version control (GitLab) and Jenkins; experience with Terraform for infrastructure-as-code is a plus.
- Other Tools: project and workflow management tools (Jira and Confluence); Looker Studio and Power BI.
PREFERRED CERTIFICATIONS AND EXPERIENCE
- Databricks Certified Data Engineer Professional or Databricks Certified Data Engineer Associate.
- Exposure to related big data and streaming tools such as Apache Kafka, GCP Pub/Sub, and Apache Airflow, as well as to BI/analytics tools (e.g., Power BI, Looker Studio), is advantageous.
SKILLS AND ABILITIES – FUNCTIONAL CAPABILITIES
- Strong technical skills
- Teamwork and collaboration
- Sound judgment and decision-making skills
- Excellent communication with technical and non-technical audiences
- Leadership and creativity
- Commitment to the greater good of customers and the enterprise
- Language & Technology MUSTS: Java/C#, message brokering, SQL/NoSQL, Angular/React
- Architecture and Patterns: single-page applications (SPA), microservices, event-driven architecture, domain-driven design (DDD)
COMPETENCIES
- Puts Customers First
  - Provides WOW! customer service every time, everywhere
  - Understands customer needs and solves their problems
  - Shows a sense of urgency in correctly meeting customer needs
- Team Player
  - Is a reliable and supportive team member
  - Steps in and assumes leadership roles when needed
- Communicates Effectively
  - Communicates in a clear, straightforward, respectful way
- Results the Right Way
  - Does what it takes to do the job right (WITTDTJR)
- Development Focused
  - Asks for and embraces feedback
- Embraces Change
  - Understands and is open to change
Qualified candidates, please apply to this posting directly. You can also email your updated resume to Sgiovati@inspyrsolutions.com for further review.
About INSPYR Solutions
Technology is our focus, and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients’ business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, projects, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at inspyrsolutions.com.
INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.
Information collected and processed through your application with INSPYR Solutions (including any job applications you choose to submit) is subject to INSPYR Solutions’ Privacy Policy and INSPYR Solutions’ AI and Automated Employment Decision Tool Policy: https://www.inspyrsolutions.com/policies/. By submitting an application, you are consenting to being contacted by INSPYR Solutions through phone, email, or text.