Aurora hires talented people with diverse backgrounds who are ready to help build a transportation ecosystem that will make our roads safer, get crucial goods where they need to go, and make mobility more efficient and accessible for all.

We are seeking a talented and experienced Software Engineer to join our data engineering and infrastructure team. In this role, you will be a key contributor to the design, development, and maintenance of our data platform, building the scalable and reliable systems that enable our organization to leverage data for insights and product innovation. You will work on the core infrastructure of our data lake and data warehouse ecosystem, along with the data pipelines and tools that process data at massive scale, ensuring it is accessible, high-quality, and secure.
In this role, you will
- Design, build, and maintain robust and scalable data pipelines and ETL/ELT processes to ingest, transform, and load data from various sources into our data warehouse.
- Develop and manage data infrastructure components using AWS cloud services and infrastructure-as-code tools like Terraform.
- Collaborate with data scientists, analysts, autonomy engineering teams, and product teams to understand their data needs and build solutions that meet their requirements.
- Optimize data processing systems for performance, reliability, and cost-efficiency.
- Implement monitoring, alerting, and logging for data pipelines and infrastructure to ensure operational stability.
- Champion best practices in data governance, data quality, and security.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 3+ years of professional experience in software engineering, with a focus on data-related projects.
- Proficiency in at least one programming language commonly used for data engineering (e.g., Python, Go, or C++).
- Solid experience with big data processing frameworks and services like Apache Spark, Apache Flink, Amazon Kinesis Data Streams, or similar technologies.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and their data services (e.g., S3, Redshift, BigQuery, Glue).
- Strong knowledge of SQL and experience working with relational and NoSQL databases.
- Intermediate knowledge of data analytics infrastructure, including data transformation tools such as dbt and visualization tools.
- Experience with building and managing data pipelines using an orchestrator like Apache Airflow.
- Able to systematically approach open-ended questions to identify pragmatic data solutions that scale.
- Able to work effectively in a highly cross-functional, fast-moving, and high-stakes environment.
- Proven ability to communicate technical, data-driven solutions to both technical and non-technical stakeholders.
Desirable Qualifications
- Experience with data warehousing solutions such as Snowflake, or with data lake architectures.
- Familiarity with modern data stack tools and practices.
- A passion for building elegant, scalable, and maintainable systems.
- Experience using Amazon Web Services (AWS) tools.
The base salary range for this position is $139,000 - $223,000 per year. Aurora’s pay ranges are determined by role, level, and location. Within the range, the successful candidate’s starting base pay will be determined based on factors including job-related skills, experience, qualifications, relevant education or training, and market conditions. These ranges may be modified in the future. The successful candidate will also be eligible for an annual bonus, equity compensation, and benefits.
#LI-SP1
#Entry-Level