Principal Software Engineer – Core Meta Data Platform
Menlo Park, California, United States
UpLift operates at the intersection of travel, technology, and alternative lending. Founded by luminaries from the travel industry, UpLift is a well-financed startup that provides point-of-sale financing for consumer travel. The consumer travel segment is a $1 trillion global market that is under-financed relative to other key consumer verticals. UpLift is generating revenue and rapidly scaling, yet is still small enough for the right candidate to make their mark on the company in a fundamental way.
UpLift is seeking a highly motivated and versatile Principal Engineer to build a scalable metadata platform that helps us understand every data set. This platform will serve as the single source of truth for metadata and will power the data catalogs for our systems. As part of the data engineering team, you will work directly with the data product team and have a direct impact on how product decisions are made.
Responsibilities
- Build a metadata platform that interfaces with other data platforms to store data management attributes and capabilities.
- Develop a data management platform that supports data set registration and capture. This platform will track every data set, along with capabilities such as data retention, data policies, and data ownership.
- Develop APIs for registering metadata with the data platform.
- Ensure the data management platform serves both business and technical metadata needs.
- Build a platform that enables search across all data sets.
- Roll out an enterprise-wide data governance framework, focused on improving data quality and protecting sensitive data through changes to organizational behavior: policies and standards, principles, governance metrics, processes, related tools, and data architecture.
- Build it and own it.
Qualifications
- Experience developing and maintaining a core metadata platform
- Experience with data at massive scale, and an understanding of the growing pains that come with it
- Experience designing, building and maintaining data pipelines in multi-cloud infrastructure (AWS and GCP)
- Experience with large-scale data warehousing and analytics projects, including AWS technologies such as Redshift, DynamoDB, RDS, Lambda, S3, and EC2
- Expert-level Linux/UNIX skills
- Experience with AWS
- Experience designing and developing big data processing systems optimized for scaling
- Significant coding experience with one or more programming languages, libraries, tools, serverless applications, and workflows (Java, REST, Node.js, Postgres, HBase, Spark, Redis)
Character and Qualities that will help you succeed
- You have a growth mindset
- You’re passionate about deep learning and growth
- You’re a self-starter and a team player
- You do the right things, and you do them the right way
- You have good interpersonal and communication skills
A few more things to know:
Ready to take on a new challenge and make an impact? You will need to be comfortable working in the most agile of environments. Requirements might change, and sometimes they may be vague. Iterations will be rapid. You will need to be nimble and take smart risks.
- Full-time salary and equity
- Health and dental insurance
- Easy commute: our office is across from the Menlo Park Caltrain station